MIT Latest News
After three years of hard work, the MIT Solar Electric Vehicle Team took first place at the 2021 American Solar Challenge (ASC) on August 7 in the Single Occupancy Vehicle (SOV) category. During the five-day race, their solar car, Nimbus — designed and built entirely by students — beat eight other SOVs from schools across the country, traversing 1,109 miles and maintaining an average speed of 38.4 miles per hour.
Held every two years, the ASC has traditionally been a timed event. This year, however, the race was based on the total distance traveled. Each team followed the same prescribed route, from Independence, Missouri, to Las Vegas, New Mexico. But teams could drive additional miles within each of the three stages — if their battery had enough juice to continue. Nimbus surpassed the closest runner-up, the University of Kentucky, by over 100 miles.
“It’s still a little surreal,” says SEVT captain Aditya Mehrotra, a rising senior in electrical engineering and computer science. “We were all hopeful, but I don’t think you ever go into racing like, ‘We got this.’ It’s more like, ‘We’re going to do our best and see how we fare.’ In this case, we were fortunate enough to do really well. The car worked beautifully, and — more importantly — the team worked beautifully and we learned a lot.”
Team work makes the dream work
Two weeks before the ASC race, each solar car was put through its paces in the Formula Sun Grand Prix at Heartland Motorsports Park in Topeka, Kansas. First, vehicles had to perform a series of qualifying challenges, called “scrutineering.” Cars that passed could participate in a track race in hopes of qualifying for ASC. Nimbus placed second, completing a total of 239 laps around the track over three days (equivalent to 597.5 miles).
In the process, SEVT member and rising junior in mechanical engineering Cameron Kokesh tied the Illinois State driver for the fastest single lap time around the track, clocking in at three minutes and 19 seconds. She’s not one to rest on her laurels, though. “It would be fun to see if we could beat that time at the next race,” she says with a smile.
Nimbus’s performance at the Formula Sun Grand Prix and ASC reflects the team’s proficiency not only in designing and building a superior solar vehicle, but also in logistics, communications, and teamwork. “It’s a huge operation,” says Mehrotra. “It’s not like we drive the car straight down the highway during the race.”
Indeed, Nimbus travels with an impressive caravan of seven vehicles manned by about two dozen SEVT members. A scout vehicle is at the front, monitoring road and weather conditions, followed by a lead car that oversees navigation. Nimbus is third in the caravan, trailed by a chase vehicle, in which the strategy team manages tasks like monitoring telemetry data, calculating how much power the solar panels are generating and the remaining travel distance, and setting target speeds. Bringing up the rear are the transport truck and trailer, a media car, and “Cupcake,” a support vehicle with food, supplies, and camping gear.
Leading up to the three-week event, the team devoted three years to designing, building, refining, and testing Nimbus. (The ASC was scheduled for 2020, but it was postponed until this year due to the Covid-19 pandemic.) They spent countless hours in the MIT Edgerton Center’s machine shop in Building N51, machining, building, and iterating. They drove the car in the greater-Boston area, up to Salem, Massachusetts, and to Cape Cod. In the spring, they traveled to Palmer Motorsports Park in Palmer, Massachusetts, to practice various components of the race. They performed scrutineering tasks like the slalom and figure-eight tests, conducted team operations training to optimize the caravan’s performance, and, of course, ran the “shakedown.”
“Shakedown is just, you drive the car around the track and you basically see what falls off and then you know what you need to fix,” Mehrotra explains. “Hopefully nothing too major falls off!”
The road ahead
At the conclusion of the race, Mehrotra officially stepped down and handed SEVT’s reins to its new leaders: Kokesh will take the helm as team captain, and rising sophomore Sydney Kim, an ocean engineering major, will serve as vice-captain. The long drive back from the Midwest gave them time to reflect on the win and future plans.
Although Nimbus performed well, there were a few instructive glitches here and there, mostly during scrutineering. But there was nothing the team couldn’t handle. For example, the canopy latch didn’t always hold, so the clear acrylic bubble covering the driver would pop open. (A little spring adjustment and tape did the trick.) In addition, Nimbus had a tendency to skid when the driver slammed on the brakes. (Driver training, and letting some air out of the tires, improved the traction.)
Then there were the unpredictable variables, beyond the team’s control. On one day, with little sun, Nimbus had to chug along the highway at a mere 15 miles per hour. And there was the time that the Kansas State Police pulled the entire caravan over. “They didn’t realize we were coming through,” Mehrotra explains.
Kim thinks one of the keys to the team’s success is that Nimbus is quite reliable. “We didn’t have wheels falling off on the road. Once we got the car rolling, things didn’t go wrong mechanically or electrically. Also, it’s very energy efficient because it’s lightweight and the shape of the vehicle is very aerodynamic. On a nice sunny day, it allows us to drive 40 miles per hour energy-neutral — the battery stays at the same amount of charge as we drive,” she says.
The next ASC will take place in 2022, so this year the team will focus on refining Nimbus to race it again next summer. They’ve also set their sights on building a car to enter in the Multiple Occupancy Vehicle (MOV) class in the 2024 race — something the team has never done. “It will definitely take the three years to build a good car to compete,” Kokesh muses. “But it’s a really good transition period, after doing so well on this race, so our team is excited about it.”
“It will be challenging for them, but I wouldn’t put anything past them,” says Patrick McAtamney, the Edgerton Center technical instructor and shop manager who works with all the student clubs and teams, from solar vehicles to Formula race cars to rockets. He attended ASC, too, and has the utmost admiration for SEVT. “It’s totally student-run. They do all the designing and machining themselves. I always tell people that sometimes I feel like my only job is to make sure they have 10 fingers when they leave the shop.”
In the meantime, before the school year begins, SEVT has another challenge: deciding where to put the trophy. “It’s huge,” McAtamney says. “It’s about the size of the Stanley Cup!”
Hastings is the Cecil and Ida Green Education Professor of Aeronautics and Astronautics and head of the Department of Aeronautics and Astronautics, a role he will continue in addition to his associate dean appointment. He will focus on advancing diversity, equity, and inclusion initiatives across the school, in collaboration with Nandi Bynoe, the School of Engineering’s assistant dean for diversity, equity, and inclusion, and the diversity officers within the school’s departments. As the current faculty lead of the School of Engineering’s DEI Committee, Hastings is already working with colleagues to ensure continued progress toward a diverse, equitable, and inclusive environment at all levels across the school.
Yang is the Gail E. Kendall Professor of Mechanical Engineering, faculty director for academics in the MIT D-Lab, and founder and director of MIT’s Ideation Lab. In her role as associate dean of engineering, she will focus on bolstering undergraduate and graduate academic programming and contributing to strategic initiatives at the school and Institute levels such as design, improving student experiences, and advancing opportunities for faculty support and mentoring, work that she has already begun through past and present department and Institute appointments.
“Maria and Dan have made incredible contributions to engineering education, and their individual and combined service to the school and the Institute has been exceptional,” says Anantha Chandrakasan, dean of the School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “I am thrilled they have joined our leadership team, and I look forward to working closely with each of them in these new roles.”
Hastings first joined the MIT community as a graduate student in 1976, after receiving his bachelor’s degree from Oxford University. He received his MS (1978) and PhD (1980) degrees in aeronautics and astronautics from MIT. He joined the MIT faculty in 1985.
Hastings’ contributions to MIT have been tremendous. He served as MIT’s dean of undergraduate education from 2006 to 2013, and in 2014, was appointed to a five-year term as the director of SMART, the Singapore-MIT Alliance for Research and Technology. He was appointed head of the Department of Aeronautics and Astronautics in 2018. In 2021, Hastings was appointed co-chair of MIT’s Values Statement Committee, a charge from Provost Martin Schmidt and former Chancellor Cynthia Barnhart, to engage the MIT community in the foundational work of developing a statement of shared values, one that is grounded in universal ideals but also speaks to MIT’s distinctive character and culture.
Hastings is a recognized leader whose research spans laser-material interactions, fusion plasma physics, spacecraft-plasma environment interactions, space plasma thrusters, and space systems analysis and design. Throughout his tenure he has taught courses in space environment interactions, rocket propulsion, advanced space power and propulsion systems, space policy, and space systems engineering. In recognition of his special service of outstanding merit performed for the Institute, he was presented with MIT’s Gordon Billard Award in 2013.
Hastings has had an active career of service outside of MIT, and from 1997 to 1999, served as chief scientist of the United States Air Force. In this role, he was the chief scientific adviser to the chief of staff and the secretary and provided assessments on a wide range of scientific and technical issues affecting the Air Force mission.
In recognition of his service and his many contributions to aeronautics and astronautics research, Hastings has received numerous honors, including the Losey Atmospheric Sciences Award from the American Institute of Aeronautics and Astronautics (AIAA) in 2002, the Exceptional Service Award from the Air Force in 2008, and the Air Force Distinguished Civilian Award in both 1997 and 1999. He is a fellow (academician) of the International Astronautical Federation and the International Council on Systems Engineering, and an honorary fellow of the AIAA. He is also a member of the National Academy of Engineering.
Yang graduated from MIT with a bachelor’s in mechanical engineering (1991), after which she headed to Stanford University where she earned a master’s (1994) and PhD (2000) from the mechanical engineering department’s design division. She joined the MIT faculty in 2007.
Yang is an internationally recognized leader in design theory and design process, with a focus on the role of design representations. Her research considers early-stage processes used to create successful designs, from consumer products to complex, large-scale engineering systems. Yang has made significant advances in characterizing the relationship between design process and outcome. This work has been recognized by an NSF CAREER award, and in 2013, she was named an ASME Fellow in recognition of her engineering achievements.
With a focus on teaching students how to uncover ways to improve design in the world around them, Yang created 2.00 Introduction to Design, and has taught a number of other undergraduate courses including 2.00B (Toy Product Design) and 2.009 (Product Engineering Processes), and graduate courses 2.739/15.783 (Product Design and Development), in collaboration with the MIT Sloan School of Management and the Rhode Island School of Design, and 2.729/EC.729 (D-Lab: Design for Scale).
In recognition of her contributions to engineering education, Yang was named a 2017 MacVicar Faculty Fellow, in addition to being the recipient of multiple teaching awards including a 2016 Bose Award, the 2014 Ruth and Joel Spira Award, and a 2012 Earll M. Murman Award. She is the recipient of a 2014 ASEE Fred Merryfield Design Award, a 2014 Capers and Marion McDonald Mentoring Award, a 2013 ASME Design Theory and Methodology Best Paper Award, and a 2008 Robert N. Noyce Career Development Professorship.
From her days as an undergraduate to her role as a faculty member, Yang has made an indelible mark on the School of Engineering and the Institute. In the department of mechanical engineering, she was the Faculty Ambassador to undergraduates until becoming Undergraduate Officer in the fall of 2018. She has served as mechanical engineering’s Area Head for Design and Manufacturing and was the co-organizer of the Rising Stars in Mechanical Engineering in 2018. She was a member of the extended committee for the New Engineering Educational Transformation (NEET) program, and a member of the Faculty Advisory Board for the Technical Leadership Program. She served as the Chancellor’s Designated Representative for the Committee on Undergraduate Programs and was a member of the Corporation Joint Advisory Committee on Institute-Wide Affairs. She also recently co-chaired a cross-Institute faculty committee on the future of design at MIT, and currently serves on the faculty steering committee for the MIT Climate and Sustainability Consortium.
Hastings and Yang succeed Michael Cima and Anette “Peko” Hosoi. “I am tremendously grateful to Peko and Michael for their incredible contributions to their roles and the school,” says Chandrakasan.
Hosoi, the Neil and Jane Pappalardo Professor of Mechanical Engineering, brought a deep commitment to mentoring and to creating a nurturing and inclusive community in her role as associate dean of engineering. She was instrumental in the school’s faculty promotions and hiring searches, as well as in developing opportunities for faculty mentoring and resources. She spearheaded the effort to launch the School of Engineering’s Faculty Gender Equity Committee, and was likewise an integral part of bolstering graduate student support and opportunities through her involvement in multiple fellowship selection processes. She has returned to her research, teaching, and advising within the department of mechanical engineering.
Cima, who also served as co-director of the MIT Innovation Initiative, was actively involved in efforts to bolster innovation across the school and to cultivate the next generation of inventors, which he achieved through a variety of roles including serving as faculty director of the Lemelson-MIT Program (a role that he will continue) and sitting on the advisory boards of the Bernard M. Gordon-MIT Engineering Leadership Program, MIT-BU Law Clinics, and the MIT Hong Kong Innovation Node. Cima’s support of educational opportunities also extended into the fellowship selection processes for the school, and identifying students for support in advancing discovery and innovation across various disciplines. Come September, he will return to his research, teaching, and advising within the department of materials science and engineering.
This year’s Bose Award for Excellence in Teaching has been presented to MIT Associate Professor Elsa Olivetti. Olivetti’s zest for enhancing the student experience is evident in the innovative and creative flair she brings to all aspects of her work.
“Professor Olivetti’s dedication to teaching is truly inspiring,” says Anantha P. Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “She has an extraordinary ability to engage her students, and has developed transformational approaches to curriculum and mentoring.”
Olivetti is the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering, and co-director of the MIT Climate and Sustainability Consortium. Her passion for addressing issues related to climate change frames the focus of her research, which centers on improving the environmental and economic sustainability of materials in the context of growing global demand. Her work focuses on reducing the significant burden of materials production and consumption through increased use of recycled and waste materials; informing the early-stage design of new materials for effective scale-up; and understanding the implications of policy, new technology development, and manufacturing processes on materials supply chains.
Olivetti has made significant contributions to education within the Department of Materials Science and Engineering since she came on board in 2014, including designing and implementing a subject on industrial ecology and materials, co-designing the Advanced Materials Machines NEET program, and developing a new undergraduate curriculum. Underscoring the care she has for her students’ success and well-being, Olivetti also cultivated the Course 3 Industry Seminars, pairing undergraduates with individuals working in careers related to 3D printing, environmental consulting, and manufacturing, with the aim of assisting her students with employment opportunities.
“Professor Olivetti is a brilliant teacher and a creative educator, who engages the classroom with an uncanny ability to keep students on the edge of their seats combined with a remarkable and signature style that creates learning moments they remember years later,” says Jeff Grossman, head of the Department of Materials Science and Engineering. “I am proud to have Elsa as a colleague, and I am delighted that her excellence has been recognized with the Bose Award.”
Olivetti received her PhD in materials science and engineering from MIT in 2007; shortly after, she joined the department as a postdoc. She subsequently worked as a research scientist in the Materials Systems Lab from 2009 to 2013 and joined the DMSE faculty in 2014. She was recently named a 2021 MacVicar Faculty Fellow in recognition of her exceptional commitment to curricular innovation, scientific research, and improving the student experience through teaching, mentoring, and advising. Previously, she received the Earll M. Murman Award for Excellence in Undergraduate Advising in 2017, the award for “best DMSE advisor” in 2019, and the Paul Gray Award for Public Service in 2020.
The Bose Award for Excellence in Teaching is given annually to a faculty member whose contributions to education have been characterized by dedication, care, and creativity. Established in 1990 by the School of Engineering, the award stands as a tribute to the late Amar Bose, a professor of electrical engineering and computer science and the founder of the Bose Corporation, to recognize outstanding contributions to undergraduate education by members of its faculty.
It is increasingly clear that the prolonged drought conditions, record-breaking heat, sustained wildfires, and frequent, more extreme storms experienced in recent years are a direct result of rising global temperatures brought on by humans’ addition of carbon dioxide to the atmosphere. And a new MIT study on extreme climate events in Earth’s ancient history suggests that today’s planet may become more volatile as it continues to warm.
The study, appearing today in Science Advances, examines the paleoclimate record of the last 66 million years, during the Cenozoic era, which began shortly after the extinction of the dinosaurs. The scientists found that during this period, fluctuations in the Earth’s climate experienced a surprising “warming bias.” In other words, there were far more warming events — periods of prolonged global warming, lasting thousands to tens of thousands of years — than cooling events. What’s more, warming events tended to be more extreme, with greater shifts in temperature, than cooling events.
The researchers say a possible explanation for this warming bias may lie in a “multiplier effect,” whereby a modest degree of warming — for instance from volcanoes releasing carbon dioxide into the atmosphere — naturally speeds up certain biological and chemical processes that enhance these fluctuations, leading, on average, to still more warming.
Interestingly, the team observed that this warming bias disappeared about 5 million years ago, around the time when ice sheets started forming in the Northern Hemisphere. It’s unclear what effect the ice has had on the Earth’s response to climate shifts. But as today’s Arctic ice recedes, the new study suggests that a multiplier effect may kick back in, and the result may be a further amplification of human-induced global warming.
“The Northern Hemisphere’s ice sheets are shrinking, and could potentially disappear as a long-term consequence of human actions,” says the study’s lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Our research suggests that this may make the Earth’s climate fundamentally more susceptible to extreme, long-term global warming events such as those seen in the geologic past.”
Arnscheidt’s study co-author is Daniel Rothman, professor of geophysics at MIT, and co-founder and co-director of MIT’s Lorenz Center.
A volatile push
For their analysis, the team consulted large databases of sediments containing deep-sea benthic foraminifera — single-celled organisms that have been around for hundreds of millions of years and whose hard shells are preserved in sediments. The composition of these shells is affected by ocean temperatures as the organisms grow; the shells are therefore considered a reliable proxy for the Earth’s ancient temperatures.
For decades, scientists have analyzed the composition of these shells, collected from all over the world and dated to various time periods, to track how the Earth’s temperature has fluctuated over millions of years.
“When using these data to study extreme climate events, most studies have focused on individual large spikes in temperature, typically of a few degrees Celsius warming,” Arnscheidt says. “Instead, we tried to look at the overall statistics and consider all the fluctuations involved, rather than picking out the big ones.”
The team first carried out a statistical analysis of the data and observed that, over the last 66 million years, the distribution of global temperature fluctuations didn’t resemble a standard bell curve, with symmetric tails representing an equal probability of extreme warm and extreme cool fluctuations. Instead, the curve was noticeably lopsided, skewed toward more warm than cool events. The curve also exhibited a noticeably longer tail, representing warm events that were more extreme, or of higher temperature, than the most extreme cold events.
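The asymmetry the team describes can be illustrated with a toy calculation (this is not the study’s code or data, just a sketch with made-up distributions): a symmetric bell curve has zero skewness, while a warming-biased record of fluctuations has positive skewness and a longer warm tail.

```python
# Illustrative sketch only: synthetic "fluctuations," not the paleoclimate data.
import numpy as np

rng = np.random.default_rng(0)

# A symmetric (bell-curve) set of fluctuations vs. a right-skewed one,
# standing in for a warming-biased temperature record.
symmetric = rng.normal(loc=0.0, scale=1.0, size=100_000)
warming_biased = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
warming_biased -= warming_biased.mean()  # center it so only the shape differs

def sample_skewness(x):
    """Third standardized moment: ~0 for a symmetric distribution,
    > 0 when the warm (right) tail is longer and heavier."""
    x = np.asarray(x)
    return np.mean((x - x.mean()) ** 3) / np.std(x) ** 3

print(f"symmetric skewness:      {sample_skewness(symmetric):+.3f}")
print(f"warming-biased skewness: {sample_skewness(warming_biased):+.3f}")
```

A lopsided histogram like the second one is the statistical signature the researchers found in the 66-million-year record.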
“This indicates there’s some sort of amplification relative to what you would otherwise have expected,” Arnscheidt says. “Everything’s pointing to something fundamental that’s causing this push, or bias toward warming events.”
“It’s fair to say that the Earth system becomes more volatile, in a warming sense,” Rothman adds.
A warming multiplier
The team wondered whether this warming bias might have been a result of “multiplicative noise” in the climate-carbon cycle. Scientists have long understood that higher temperatures, up to a point, tend to speed up biological and chemical processes. Because the carbon cycle, which is a key driver of long-term climate fluctuations, is itself composed of such processes, increases in temperature may lead to larger fluctuations, biasing the system towards extreme warming events.
In mathematics, there exists a set of equations that describes such general amplifying, or multiplicative effects. The researchers applied this multiplicative theory to their analysis to see whether the equations could predict the asymmetrical distribution, including the degree of its skew and the length of its tails.
In the end, they found that the data, and the observed bias toward warming, could be explained by the multiplicative theory. In other words, it’s very likely that, over the last 66 million years, periods of modest warming were on average further enhanced by multiplier effects, such as the response of biological and chemical processes that further warmed the planet.
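A minimal numerical sketch shows how multiplicative noise produces exactly this kind of warming bias. The stochastic equation below is a generic textbook example, not the study’s actual model: a temperature anomaly T relaxes back toward zero but is kicked by noise whose strength grows with T itself, so warm excursions get amplified.

```python
# Toy multiplicative-noise model (illustrative; parameters are arbitrary).
import numpy as np

rng = np.random.default_rng(1)

def simulate(beta, n_steps=200_000, dt=0.01, sigma=0.5):
    """Euler-Maruyama integration of dT = -T dt + sigma*(1 + beta*T) dW.
    beta = 0: purely additive noise -> symmetric fluctuations.
    beta > 0: state-dependent (multiplicative) noise -> warm-skewed ones."""
    T = np.empty(n_steps)
    T[0] = 0.0
    noise = rng.normal(scale=np.sqrt(dt), size=n_steps - 1)
    for i in range(n_steps - 1):
        T[i + 1] = T[i] - T[i] * dt + sigma * (1.0 + beta * T[i]) * noise[i]
    return T

def skewness(x):
    return np.mean((x - x.mean()) ** 3) / np.std(x) ** 3

additive = simulate(beta=0.0)
multiplicative = simulate(beta=0.8)
print(f"additive noise skewness:       {skewness(additive):+.3f}")
print(f"multiplicative noise skewness: {skewness(multiplicative):+.3f}")
```

With the multiplier switched on, the same underlying forcing yields far more extreme warm excursions than cool ones, qualitatively matching the bias seen in the paleoclimate record.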
As part of the study, the researchers also looked at the correlation between past warming events and changes in Earth’s orbit. Over hundreds of thousands of years, Earth’s orbit around the sun regularly becomes more or less elliptical. But scientists have wondered why many past warming events appeared to coincide with these changes, and why these events feature outsized warming compared with what the change in Earth’s orbit could have wrought on its own.
So, Arnscheidt and Rothman incorporated the Earth’s orbital changes into the multiplicative model and their analysis of Earth’s temperature changes, and found that multiplier effects could predictably amplify, on average, the modest temperature rises due to changes in Earth’s orbit.
“Climate warms and cools in synchrony with orbital changes, but the orbital cycles themselves would predict only modest changes in climate,” Rothman says. “But if we consider a multiplicative model, then modest warming, paired with this multiplier effect, can result in extreme events that tend to occur at the same time as these orbital changes.”
“Humans are forcing the system in a new way,” Arnscheidt adds. “And this study is showing that, when we increase temperature, we’re likely going to interact with these natural, amplifying effects.”
This research was supported, in part, by MIT’s School of Science.
Much of our daily life requires us to make inferences about the world around us. As you think about which direction your tennis opponent will hit the ball, or try to figure out why your child is crying, your brain is searching for answers about possibilities that are not directly accessible through sensory experiences.
MIT Associate Professor Mehrdad Jazayeri has devoted most of his career to exploring how the brain creates internal representations, or models, of the external world to make intelligent inferences about hidden states of the world.
“The one question I am most interested in is how does the brain form internal models of the external world? Studying inference is really a powerful way of gaining insight into these internal models,” says Jazayeri, who recently earned tenure in the Department of Brain and Cognitive Sciences and is also a member of MIT’s McGovern Institute for Brain Research.
Using a variety of approaches, including detailed analysis of behavior, direct recording of activity of neurons in the brain, and mathematical modeling, he has discovered how the brain builds models of statistical regularities in the environment. He has also found circuits and mechanisms that enable the brain to capture the causal relationships between observations and outcomes.
An unusual path
Jazayeri, who has been on the faculty at MIT since 2013, took an unusual path to a career in neuroscience. Growing up in Tehran, Iran, he was an indifferent student until his second year of high school, when he got interested in solving challenging geometry puzzles. He also started programming with the ZX Spectrum, an early 8-bit personal computer, which his father had given him.
During high school, he was chosen to train for Iran’s first ever National Physics Olympiad team, but when he failed to make it to the international team, he became discouraged and temporarily gave up on the idea of going to college. Eventually, he participated in the University National Entrance Exam and was admitted to the electrical engineering department at Sharif University of Technology.
Jazayeri didn’t enjoy his four years of college education. The experience mostly helped him realize that he was not meant to become an engineer. “I realized that I’m not an inventor. What inspires me is the process of discovery,” he says. “I really like to figure things out, not build things, so those four years were not very inspiring.”
After graduating from college, Jazayeri spent a few years working on a banana farm near the Caspian Sea, along with two friends. He describes those years as among the best and most formative of his life. He would wake by 4 a.m., work on the farm until late afternoon, and spend the rest of the day thinking and reading. One topic he read about with great interest was neuroscience, which led him a few years later to apply to graduate school.
He immigrated to Canada and was admitted to the University of Toronto, where he earned a master’s degree in physiology and neuroscience. While there, he worked on building small circuit models that would mimic the activity of neurons in the hippocampus.
From there, Jazayeri went on to New York University to earn a PhD in neuroscience, where he studied how signals in the visual cortex support perception and decision-making. “I was less interested in how the visual cortex encodes the external world,” he says. “I wanted to understand how the rest of the brain decodes the signals in visual cortex, which is, in effect, an inference problem.”
He continued pursuing his interest in the neurobiology of inference as a postdoc at the University of Washington, where he investigated how the brain uses temporal regularities in the environment to estimate time intervals, and uses knowledge about those intervals to plan for future actions.
Building internal models to make inferences
Inference is the process of drawing conclusions based on information that is not readily available. Making rich inferences from scarce data is one of humans’ core mental capacities, one that is central to what makes us the most intelligent species on Earth. To do so, our nervous system builds internal models of the external world, and those models help us think through possibilities without directly experiencing them.
The problem of inference presents itself in many behavioral settings.
“Our nervous system makes all sorts of internal models for different behavioral goals, some that capture the statistical regularities in the environment, some that link potential causes to effects, some that reflect relationships between entities, and some that enable us to think about others,” Jazayeri says.
Jazayeri’s lab at MIT is made up of a group of cognitive scientists, electrophysiologists, engineers, and physicists with a shared interest in understanding the nature of internal models in the brain and how those models enable us to make inferences in different behavioral tasks.
Early work in the lab focused on a simple timing task to examine the problem of statistical inference, that is, how we use statistical regularities in the environment to make accurate inferences. First, they found that the brain coordinates movements in time using a dynamic process, akin to an analog timer. They also found that the neural representation of time in the frontal cortex is continuously calibrated based on prior experience, allowing us to make more accurate time estimates in the presence of uncertainty.
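The idea that prior experience calibrates noisy time estimates can be sketched as a small Bayesian calculation. This is a hypothetical illustration, not the lab’s actual model: the interval range, noise levels, and grid are all invented for the example.

```python
# Hypothetical Bayesian interval estimator (illustrative parameters only).
import numpy as np

# Candidate intervals the observer has experienced: a uniform prior
# over 600-1000 ms, discretized on a 1-ms grid.
prior_intervals = np.linspace(600.0, 1000.0, 401)

def posterior_mean_estimate(measurement_ms, noise_sd):
    """Bayes least-squares estimate E[interval | measurement]:
    Gaussian measurement likelihood times a uniform prior,
    normalized, then averaged over candidate intervals."""
    likelihood = np.exp(
        -0.5 * ((measurement_ms - prior_intervals) / noise_sd) ** 2
    )
    posterior = likelihood / likelihood.sum()
    return float(np.sum(posterior * prior_intervals))

# The same 650 ms measurement, read out by a noisy vs. a precise observer:
# the noisier the measurement, the more the estimate regresses toward
# the middle of the prior range.
est_noisy = posterior_mean_estimate(650.0, noise_sd=80.0)
est_precise = posterior_mean_estimate(650.0, noise_sd=10.0)
print(f"precise observer: {est_precise:.1f} ms")
print(f"noisy observer:   {est_noisy:.1f} ms")
```

The pull of estimates toward the prior mean under uncertainty is the behavioral hallmark of this kind of calibration.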
Later, the lab developed a complex decision-making task to examine the neural basis of causal inference, or the process of deducing a hidden cause based on its effects. In a paper that appeared in 2019, Jazayeri and his colleagues identified a hierarchical and distributed brain circuit in the frontal cortex that helps the brain to determine the most probable cause of failure within a hierarchy of decisions.
More recently, the lab has extended its investigation to other behavioral domains, including relational inference and social inference. Relational inference is about situating an ambiguous observation using relational memory. For example, coming out of a subway in a new neighborhood, we may use our knowledge of the relationship between visible landmarks to infer which way is north. Social inference, which is extremely difficult to study, involves deducing other people’s beliefs and goals based on their actions.
Along with studies in human volunteers and animal models, Jazayeri’s lab develops computational models based on neural networks, which helps them to test different possible hypotheses of how the brain performs specific tasks. By comparing the activity of those models with neural activity data from animals, the researchers can gain insight into how the brain actually performs a particular type of inference task.
“My main interest is in how the brain makes inferences about the world based on the neural signals,” Jazayeri says. “All of my work is about looking inside the brain, measuring signals, and using mathematical tools to try to understand how those signals are manifestations of an internal model within the brain.”
On Aug. 5, the White House announced that it seeks to ensure that 50 percent of all new passenger vehicles sold in the United States by 2030 are powered by electricity. The purpose of this target is to enable the U.S. to remain competitive with China in the growing electric vehicle (EV) market and to meet its international climate commitments. Setting ambitious EV sales targets and transitioning to zero-carbon power sources in the United States and other nations could lead to significant reductions in carbon dioxide and other greenhouse gas emissions in the transportation sector, moving the world closer to achieving the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius relative to preindustrial levels.
At this time, electrification of the transportation sector is occurring primarily in private light-duty vehicles (LDVs). In 2020, the global EV fleet exceeded 10 million, but that’s a tiny fraction of the cars and light trucks on the road. How much of the LDV fleet will need to go electric to keep the Paris climate goal in play?
To help answer that question, researchers at the MIT Joint Program on the Science and Policy of Global Change and MIT Energy Initiative have assessed the potential impacts of global efforts to reduce carbon dioxide emissions on the evolution of LDV fleets over the next three decades.
Using an enhanced version of the multi-region, multi-sector MIT Economic Projection and Policy Analysis (EPPA) model that includes a representation of the household transportation sector, they projected changes for the 2020-50 period in LDV fleet composition, carbon dioxide emissions, and related impacts for 18 different regions. Projections were generated under four increasingly ambitious climate mitigation scenarios: a “Reference” scenario based on current market trends and fuel efficiency policies; a “Paris Forever” scenario in which current Paris Agreement commitments (Nationally Determined Contributions, or NDCs) are maintained but not strengthened after 2030; a “Paris to 2 C” scenario in which decarbonization actions are enhanced to be consistent with capping global warming at 2 C; and an “Accelerated Actions” scenario that caps global warming at 1.5 C through much more aggressive emissions targets than the current NDCs.
Based on projections spanning the first three scenarios, the researchers found that the global EV fleet will likely grow to about 95-105 million EVs by 2030, and 585-823 million EVs by 2050. In the Accelerated Actions scenario, global EV stock reaches more than 200 million vehicles in 2030, and more than 1 billion in 2050, accounting for two-thirds of the global LDV fleet. The research team also determined that EV uptake will likely grow but vary across regions over the 30-year study time frame, with China, the United States, and Europe remaining the largest markets. Finally, the researchers found that while EVs play a role in reducing oil use, a more substantial reduction in oil consumption comes from economy-wide carbon pricing. The results appear in a study in the journal Economics of Energy & Environmental Policy.
“Our study shows that EVs can contribute significantly to reducing global carbon emissions at a manageable cost,” says MIT Joint Program Deputy Director and MIT Energy Initiative Senior Research Scientist Sergey Paltsev, the lead author. “We hope that our findings will help decision-makers to design efficient pathways to reduce emissions.”
To boost the EV share of the global LDV fleet, the study’s co-authors recommend more ambitious policies to mitigate climate change and decarbonize the electric grid. They also envision an “integrated system approach” to transportation that emphasizes making internal combustion engine vehicles more efficient, a long-term shift to low- and net-zero carbon fuels, and systemic efficiency improvements through digitalization, smart pricing, and multi-modal integration. While the study focuses on EV deployment, the authors also stress the need for investment in all possible decarbonization options related to transportation, including enhancing public transportation, avoiding urban sprawl through strategic land-use planning, and reducing the use of private motorized transport by mode switching to walking, biking, and mass transit.
This research is an extension of the authors’ contribution to the MIT Mobility of the Future study.
The Department of Chemistry’s state-of-the-art Undergraduate Teaching Lab (UGTL), which opened on the fifth floor of MIT.nano in fall 2018, is home to 69 fume hoods. The hoods, ranging from four to seven feet wide, protect students and staff from potential exposure to hazardous materials while working in the lab. Fume hoods are among the largest energy consumers on the MIT campus; in addition to the energy required to operate them, the air that replaces what they exhaust must be heated or cooled. Thus, any lab with a large number of fume hoods faces high operational energy costs.
“When the UGTL’s fume hoods are in use, the air-change rate — the number of times fresh air is exchanged in the space in a given time frame — averages between 25 and 30 air changes per hour (ACH),” says Nicole Imbergamo, senior sustainability project manager in MIT Campus Construction. “When the lab is unoccupied, that air-change rate averages 11 ACH. For context, in a laboratory with a single fume hood, typically MIT’s EHS [Environment, Health, and Safety] department would require six ACH when occupied and four ACH when unoccupied. Hibernation of the fume hoods allowed us to close the gap between the current unoccupied air-change rate and what is typical on campus in a non-teaching lab environment.”
Fifty-eight of the 69 fume hoods in the UGTL are consistently unused between 6:30 p.m. and noon the following day, as well as all weekend long, totaling 135 hours per week. Based on these numbers, the team determined it was safe to “hibernate” the fume hoods during the off hours, saving the Institute on fan energy and the cost of heating and cooling the air that gets flushed into each hood.
John Dolhun PhD ’73 is the director of the UGTL. “The project started when MIT Green Labs — a division of the Environment, Health, and Safety Office now known as the Safe & Sustainable Labs Program — contacted the UGTL in October 2018, followed by an initial meeting in November 2018 with all the key players, including Safe and Sustainable Labs, the EHS Office, the Department of Facilities, and the Department of Chemistry,” says Dolhun. “It was during these initial discussions that the UGTL recognized this was something we had to do. The project was completed in April 2021.”
Now, through a scheduled time clock in the Building Management System (BMS), the 58 fume hoods are flipped into hibernation mode at the end of each day. “In hibernation mode, the exhaust air valves go to their minimum airflow, which is lower than a fume hood’s minimum required when in use,” says Imbergamo. “As a safety feature, if the sash of a fume hood is opened while it is in standby mode, the valve and hood are automatically released from hibernation until the next scheduled time.” The BMS allows Dolhun and anyone with access to instantly view the hibernation status of every hood online, at any time, from any location. As an additional safety measure, the lab is equipped with an emergency kill switch that, when activated, instantly takes all 58 fume hoods out of hibernation, increasing the air changes per hour by about 37 percent at one touch.
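The scheduled hibernation, sash-triggered release, and kill switch described above can be sketched as a small state machine. This is a hypothetical illustration, not the actual BMS code; all class names and airflow setpoints are invented.

```python
# Hypothetical sketch of the hibernation logic described above; this is not
# the actual BMS code, and all names and airflow setpoints are invented.

HIBERNATION_CFM = 50   # assumed hibernation-mode airflow (invented value)
IN_USE_MIN_CFM = 300   # assumed in-use minimum airflow (invented value)

class FumeHood:
    def __init__(self, hood_id):
        self.hood_id = hood_id
        self.sash_open = False
        self.hibernating = False

    def airflow(self):
        """Exhaust-valve setpoint: lower in hibernation than the in-use minimum."""
        return HIBERNATION_CFM if self.hibernating else IN_USE_MIN_CFM

    def open_sash(self):
        # Safety feature: opening the sash releases the hood from hibernation
        # until the next scheduled hibernation time.
        self.sash_open = True
        self.hibernating = False

class BuildingManagementSystem:
    def __init__(self, hoods):
        self.hoods = hoods

    def scheduled_hibernate(self):
        """Time-clock event: hibernate every hood whose sash is closed."""
        for hood in self.hoods:
            if not hood.sash_open:
                hood.hibernating = True

    def emergency_release(self):
        """Kill switch: take every hood out of hibernation at once."""
        for hood in self.hoods:
            hood.hibernating = False

hoods = [FumeHood(i) for i in range(58)]
bms = BuildingManagementSystem(hoods)
bms.scheduled_hibernate()
print(sum(h.hibernating for h in hoods))   # 58
hoods[0].open_sash()                       # sash opened -> auto-release
print(hoods[0].airflow())                  # 300 (back to in-use minimum)
```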
The MIT operations team worked with the building controls vendor to create graphics that allow UGTL users to easily see the hood sash positions and their current status as either hibernated or in normal operating mode. This virtual visibility allows the UGTL team to confirm the hoods are all closed before leaving the lab at the end of each day, and to verify the energy reductions. It also provides an opportunity to educate students on the importance of closing the sash at the end of their lab work, and on fume hood management best practices that will serve them far beyond their undergraduate chemistry classes.
Since employing hibernation mode, the unoccupied UGTL air-change rate has dropped from 11 ACH to seven ACH, eliminating unnecessary energy use and saving MIT an estimated $21,000 per year. The annual utility savings from reduced supply and exhaust fan energy, as well as from heating and cooling the supply air to the space, will give MIT a payback period of less than three years. The overall success of the hood hibernation program, and the savings it has afforded the UGTL, is a strong motivator for the Green Initiative. The highlights of this system will be shared with other labs, both at MIT and beyond, that may benefit from similar adjustments.
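The figures quoted above can be sanity-checked with a little arithmetic. This sketch uses only numbers stated in the article; the implied project cost is an inference from the stated payback period, not a reported figure.

```python
# Sanity check of the figures quoted above; the implied project cost is an
# inference from the stated payback period, not a reported number.
unoccupied_ach_before = 11   # air changes per hour, before hibernation
unoccupied_ach_after = 7     # air changes per hour, with hibernation

reduction = (unoccupied_ach_before - unoccupied_ach_after) / unoccupied_ach_before
print(f"Unoccupied air-change reduction: {reduction:.0%}")   # 36%

annual_savings = 21_000      # estimated utility savings, dollars per year
payback_years = 3            # "less than three-year payback"
implied_max_cost = annual_savings * payback_years
print(f"Implied project cost: under ${implied_max_cost:,}")  # under $63,000
```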
The spiderweb is an everyday architecture — non-monumental and easily overlooked. Yet artists and scientists are working to unlock the secret of its complex geometry, a mystery that could inspire everything from resilient new building materials to deeper understandings of the structure of the universe.
When artist Tomàs Saraceno first came to MIT in 2012, as the inaugural Center for Art, Science, & Technology (CAST) visiting artist, he had recently pioneered a new method of scanning 3D webs with researchers at TU Darmstadt in Germany using sheet lasers. Inspired by the idea that the early superstructure of the universe might have resembled a spider's web, he then used these images to create the 2010 installation “14 Billions (Working Title),” a hand-knotted reconstruction amplified to 17 times the web’s original size.
Meanwhile, materials scientist Markus Buehler, the McAfee Professor of Engineering at MIT, had been studying orb webs — the flat, radial Halloween staples — for years, analyzing how the strong-yet-flexible silk might inspire new building materials. He had long been interested in the intersection of materials and music. Using an approach to mathematics called category theory, he showed how natural hierarchical materials like spider silk exhibit properties comparable to various forms of music, in terms of their hierarchical structure and function. From this research, he developed music based on the structure of silk. But, until then, he had never attempted to model a 3D web. “We always wanted to work on 3D webs,” he said, “but didn’t have accurate models of such complex structures.”
Soon, a collaboration was born. By 2014, Buehler’s lab, with special efforts from postdoc Zhao Qin and graduate student Bogda Demian, had created a computer model and simulation of the data generated by Saraceno’s scans for the “14 Billions” project, which they presented at a panel discussion at the MIT Museum. For the first time, they could not only accurately visualize the web but replicate its internal structure, gaining precise information about every single silk thread — the thicknesses, tensions, and lengths — and how they interacted to create such an elaborate architecture. This new analytical model developed by the lab gave rise to a new approach to studying the webs, and the applications were endless.
Saraceno later brought a tent-web spider/web, said to have inspired architect Frei Otto's design of the 1972 Olympic stadium in Munich, to the MIT lab. Installed in a carbon frame, the Cyrtophora citricola proceeded to spin a web for the researchers to document in situ. “This collaboration becomes an engine for ongoing speculation about the 'umwelten' of spider/webs, opening up new possibilities for experimentation in methods, scales and techniques for interspecies relation,” said Ally Bisshop, a researcher in Saraceno’s Arachnophilia Research Laboratory, which leads the studio’s community and interdisciplinary research project Arachnophilia.
Over the years, Buehler and Saraceno refined the three-dimensional models, using increasingly advanced imaging and simulation techniques. In the near decade that followed, with the support of CAST, their model of the spiderweb has led to an enormous array of undertakings, both together and with their respective labs: a digital web archive, a virtual reality simulation, live musical performances and “cosmic jam” sessions with spiders and their webs, multiple peer-reviewed scientific research papers on the spiderweb’s structural and mechanical properties, sonifications of spider silk proteins, 3D-printed spider silk, and an app for citizen arachnophiles that launched at the 2019 Venice Biennale, among other projects. At times, the collaborations themselves seem to resemble a three-dimensional web: complex forms, knotty with unexpected and intricate connections, surprisingly enduring, and often very beautiful.
The spider web as musical instrument
The structure of the spiderweb also inspired many new musical pieces. For one, Buehler and his team developed a granular synthesis technique that mimicked the biochemical process of silk production. Just as silk is composed of molecular components, granular synthesis uses grains of sound that are then assembled into a new form. More recently, Buehler has combined this kind of web sonification with his molecular music, overlaying frequencies and melodies extracted from the proteins that make up silk, as well as other key features of spiders, such as the venom molecules. In 2020, Buehler produced the video, “The Oracle of the Virus, the Spider,” for Studio Saraceno that displayed his work exposing spiders to vibrations.
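Granular synthesis, as described above, chops a source sound into short windowed "grains" and reassembles them into a new waveform. Below is a minimal sketch with an invented sine-wave source and made-up grain parameters; it illustrates the general technique, not Buehler's actual implementation.

```python
import numpy as np

# Toy granular-synthesis sketch: assemble many short windowed "grains" of
# sound into a new waveform. The source signal and all parameters are
# invented for illustration.
SR = 44_100                     # sample rate (Hz)
GRAIN_LEN = 2_048               # samples per grain
HOP = 1_024                     # 50 percent overlap between grains
N_GRAINS = 40

rng = np.random.default_rng(0)
# 1 second of "raw material": a 220 Hz sine wave
source = np.sin(2 * np.pi * 220 * np.arange(SR) / SR)

window = np.hanning(GRAIN_LEN)  # taper each grain's edges to avoid clicks
out = np.zeros(HOP * (N_GRAINS - 1) + GRAIN_LEN)

for i in range(N_GRAINS):
    start = rng.integers(0, len(source) - GRAIN_LEN)  # pick a random grain
    grain = source[start:start + GRAIN_LEN] * window
    pos = i * HOP
    out[pos:pos + GRAIN_LEN] += grain                 # overlap-add into output

out /= np.max(np.abs(out))      # normalize to [-1, 1]
print(out.shape)                # (41984,)
```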
Another spinoff emerged when composer and clarinetist Evan Ziporyn, faculty director of CAST, made a pit stop at Saraceno’s studio in 2015 while passing through Berlin. They had gotten to know each other’s work during Saraceno’s visits to MIT. “Tomàs always had this dream of a collective instrument based on the spiderweb,” says Ziporyn. “It was a kind of cosmic drum circle, as he conceived it.” In the warren of old warehouse buildings, Saraceno’s studio was a freewheeling ensemble of designers, architects, anthropologists, biologists, engineers, art historians, curators, and musicians. “It had a crazy sort of Andy Warhol Factory meets Eastern European startup vibe,” Ziporyn says. On the day Ziporyn arrived, the spider was unusually active, Saraceno informed him. He soon found himself improvising with his bass clarinet alongside the spider, whose movements were amplified by custom-made piezo mics that were developed through the years at Studio Saraceno.
A few years later, in 2018, Saraceno invited Ziporyn to perform as part of his “ON AIR” exhibition at the Palais de Tokyo in Paris. Ziporyn’s earlier impromptu performance, along with other interspecies musical experiments, had already been included in the studio’s exhibition, “Arachnid Orchestra. Jam Sessions,” and they wanted to extend the ideas even further. “It struck me that the web itself — thousands of strings — could be a kind of harp. So this connected back to Tomàs's original idea years ago of building a spider instrument," says Ziporyn. Next, he recruited a team of co-creators — Isabelle Su, Ian Hattwick, and Christine Southworth — to further develop the project, while Buehler’s lab developed the model and codes, including the interactive tools, for the piece that was later performed.
The resulting collaborative work, “Arachnodrone/Spider's Canvas,” used a 3D model of a scanned Cyrtophora citricola web to construct an immersive, interactive soundscape, translating the structure into visual and sonic form. In the piece, Su, a doctoral student in Buehler’s lab who was recruited after attending Buehler and Saraceno’s 2014 talk at the MIT Museum, leads audiences through a virtual version of the web. Hattwick, a sound artist and lecturer in music technology at MIT, digitally sculpts the sonic information, while Ziporyn and Southworth add sonic textures using guitars, electronic wind instruments, eBows, and real-time signal processing. The project resulted in another peer-reviewed research paper, as well as a new work-in-progress by Ziporyn and Southworth, supported by the MAP Fund and a School of Humanities, Arts, and Social Sciences Digital Humanities Fellowship, based on the structure of snowflakes.
For Ziporyn, the experience provided a new opportunity for creative improvisation, with each member of the ensemble improvising alongside one another in their own unique medium. “Each performance is a newly generated thing,” he says. “It's never the same. We never know what we're going to get. It's working out to be a real interactive improvisation. That to me is the essence of artistic interdisciplinarity.”
Building a web over time
This summer, Buehler served as the principal investigator for a new scientific study, published in PNAS and entitled “In-situ Three-Dimensional Spider Web Construction and Mechanics.” Other authors included Saraceno and members of his studio, and MIT undergraduates Neosha Narayanan and Marcos A. Logrono. By automating their web-scanning method with the help of complex computational algorithms, researchers were able to study not only completed webs, but ones still under construction. The computer model and simulations allowed them to study the web without destroying it. The lab found that the web, like the collaboration, is an object that transforms over time: as it grew in density, the web became stronger and tougher. After creating the initial foundation, a spider continues to make changes: adding, reinforcing, and repairing.
In addition, Buehler developed methods not only to model the spiderwebs, but also to create physical artifacts using additive manufacturing: a resin-based printing process that transforms a liquid into a solid, similar to the way a spider spins silk in nature, as well as fused-deposition modeling, which uses the heating and cooling of polymers. Learning more about webs may inspire new self-sufficient and self-repairable smart structures. “We were just really blown away by the fact that there is a lot of internal structure,” says Buehler. “Some are like meshes, tunnels, less-organized regions of the web. From a materials perspective, that’s very, very interesting, because it gives you a lot of design ideas.”
The webs, Buehler found, are living materials — tunable, changeable, and responsive, thanks to the spider’s ability to alter the web over time. This is a future direction for the design of materials, as engineers are now trying to generate materials that are more alive than static to serve multiple and varied purposes, allowing for a sustainable life cycle that facilitates reuse.
Creating the infrastructure for collaboration
Residencies at CAST are open-ended, organized more around the free pursuit of ideas than any predefined product, and, as a result, particularly in the case of Saraceno, they often lead to wild and unexpected outcomes. While initial visits are exploratory, involving tours of labs and conversations over coffee, many ambitious projects — by artists such as Lara Baladi, Karim Ben Khelifa, Agnieszka Kurant, Jamshied Sharifi, and Anicka Yi — incubate over a period of months and years. What begins as R&D at MIT often grows into large-scale exhibitions at venues like the San Francisco Museum of Modern Art, the Guggenheim, and The Shed.
At times, these far-ranging and multifaceted projects challenge conventional ideas about how work gets made. “These collaborations scramble the traditional understandings of authorship in different disciplines — from individual, co-authorship, collectives, ensembles, co-creation, spinoffs. None of these existing conventions quite capture the nature of the work and its various components,” says Leila Kinney, executive director of CAST.
The webs offer us new ways of thinking about the world, and our interactions within it. “Museological collections tend to focus on the spider in isolation of its web. We argue for the importance of attending to the material architectures (webs) that connect the spider to the world,” writes Studio Saraceno. And while artworks have historically been treated as a static object produced by single individuals, this collaborative and ongoing creative work demonstrates how the web — the complex infrastructural network that supports the spider — is equally significant as the animal itself.
Duchenne muscular dystrophy (DMD), a rare genetic disease usually diagnosed in young boys, gradually weakens muscles across the body until the heart or lungs fail. Symptoms often show up by age 5; as the disease progresses, patients lose the ability to walk around age 12. Today, the average life expectancy for DMD patients hovers around 26.
It was big news, then, when Cambridge, Massachusetts-based Sarepta Therapeutics announced in 2019 a breakthrough drug that directly targets the mutated gene responsible for DMD. The therapy uses an antisense phosphorodiamidate morpholino oligomer (PMO), a large synthetic molecule that permeates the cell nucleus in order to modify the dystrophin gene, allowing for production of a key protein that is normally missing in DMD patients. “But there’s a problem with PMO by itself. It’s not very good at entering cells,” says Carly Schissel, a PhD candidate in MIT’s Department of Chemistry.
To boost delivery to the nucleus, researchers can affix cell-penetrating peptides (CPPs) to the drug, thereby helping it cross the cell and nuclear membranes to reach its target. Which peptide sequence is best for the job, however, has remained a looming question.
MIT researchers have now developed a systematic approach to solving this problem by combining experimental chemistry with artificial intelligence to discover nontoxic, highly active peptides that can be attached to PMO to aid delivery. By developing these novel sequences, they hope to rapidly accelerate the development of gene therapies for DMD and other diseases.
Results of their study have now been published in the journal Nature Chemistry in a paper led by Schissel and Somesh Mohapatra, a PhD student in the MIT Department of Materials Science and Engineering, who are the lead authors. Rafael Gomez-Bombarelli, assistant professor of materials science and engineering, and Bradley Pentelute, professor of chemistry, are the paper’s senior authors. Other authors include Justin Wolfe, Colin Fadzen, Kamela Bellovoda, Chia-Ling Wu, Jenna Wood, Annika Malmberg, and Andrei Loas.
“Proposing new peptides with a computer is not very hard. Judging if they’re good or not, this is what’s hard,” says Gomez-Bombarelli. “The key innovation is using machine learning to connect the sequence of a peptide, particularly a peptide that includes non-natural amino acids, to experimentally-measured biological activity.”
CPPs are relatively short chains, made up of between five and 20 amino acids. While one CPP can have a positive impact on drug delivery, several linked together have a synergistic effect in carrying drugs over the finish line. These longer chains, containing 30 to 80 amino acids, are called miniproteins.
Before a model could make any worthwhile predictions, researchers on the experimental side needed to create a robust dataset. By mixing and matching 57 different peptides, Schissel and her colleagues were able to build a library of 600 miniproteins, each attached to PMO. With an assay, the team was able to quantify how well each miniprotein could move its cargo across the cell.
The decision to test the activity of each sequence with PMO already attached was important: because any given drug will likely change the activity of a CPP sequence, it is difficult to repurpose existing data. Moreover, data generated in a single lab, on the same machines, by the same people, meets a gold standard for consistency in machine-learning datasets.
One goal of the project was to create a model that could work with any amino acid. While only 20 amino acids naturally occur in the human body, hundreds more exist elsewhere — like an amino acid expansion pack for drug development. To represent them in a machine-learning model, researchers typically use one-hot encoding, a method that assigns each component to a series of binary variables. Three amino acids, for example, would be represented as 100, 010, and 001. To add new amino acids, the number of variables would need to increase, meaning researchers would be stuck having to rebuild their model with each addition.
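The scaling problem with one-hot encoding can be seen in a minimal sketch. The three-residue alphabet below is a toy, and "Dap" stands in for a hypothetical non-natural amino acid.

```python
# Minimal illustration of the one-hot scheme described above. With a fixed
# alphabet, each amino acid maps to one binary slot, so adding a new amino
# acid forces every vector to grow and the model to be rebuilt.

def one_hot(sequence, alphabet):
    """Encode each residue as a binary vector with a single 1."""
    index = {aa: i for i, aa in enumerate(alphabet)}
    return [[1 if i == index[aa] else 0 for i in range(len(alphabet))]
            for aa in sequence]

alphabet = ["Ala", "Gly", "Ser"]            # toy three-residue alphabet
print(one_hot(["Gly"], alphabet))           # [[0, 1, 0]]

# Adding one more amino acid changes every vector's length:
expanded = alphabet + ["Dap"]               # hypothetical non-natural residue
print(one_hot(["Gly"], expanded))           # [[0, 1, 0, 0]]
```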
Instead, the team opted to represent amino acids with topological fingerprinting, which is essentially creating a unique barcode for each sequence, with each line in the barcode denoting either the presence or absence of a particular molecular substructure. “Even if the model has not seen [a sequence] before, we can represent it as a barcode, which is consistent with the rules that model has seen,” says Mohapatra, who led development efforts on the project. By using this system of representation, the researchers were able to expand their toolbox of possible sequences.
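The barcode idea can be illustrated with a toy substructure fingerprint. The fragment list and residue compositions below are simplified inventions for illustration; real topological fingerprints are computed from the molecular graph, not from hand-coded tables.

```python
# Toy version of a substructure "barcode": each bit records whether a chemical
# fragment is present in a residue. Fragments and residue compositions here
# are invented for illustration.

FRAGMENTS = ["amine", "carboxyl", "guanidinium", "aromatic_ring", "thiol"]

# Hypothetical fragment inventories for a few residues
RESIDUES = {
    "Gly": {"amine", "carboxyl"},
    "Arg": {"amine", "carboxyl", "guanidinium"},
    "Phe": {"amine", "carboxyl", "aromatic_ring"},
}

def fingerprint(residue):
    """One barcode line per fragment: 1 if present, 0 if absent."""
    present = RESIDUES[residue]
    return [1 if frag in present else 0 for frag in FRAGMENTS]

def encode(sequence):
    """A sequence becomes a list of barcodes; new residues need no
    re-encoding, as long as their fragments are described the same way."""
    return [fingerprint(aa) for aa in sequence]

print(fingerprint("Arg"))       # [1, 1, 1, 0, 0]
print(encode(["Gly", "Phe"]))
```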
The team trained a convolutional neural network on the miniprotein library, with each of the 600 miniproteins labeled with its activity, indicating its ability to permeate the cell. Early on, the model proposed miniproteins laden with arginine, an amino acid that tears a hole in the cell membrane, which is not ideal for keeping cells alive. To solve this issue, the researchers used an optimizer to disincentivize arginine, keeping the model from cheating.
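One common way to implement such a disincentive is to subtract an arginine-dependent penalty from the optimizer's objective. The sketch below is a hedged illustration: the "activity model" is a stand-in that deliberately rewards arginine (mimicking the shortcut described above), and the penalty weight and hill-climb search are invented, not the paper's actual network or optimizer.

```python
import random

# Hedged sketch of penalizing arginine in an optimizer's objective.
# The activity model, penalty weight, and search loop are all invented.

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
PENALTY = 2.5   # assumed penalty weight (invented for illustration)

def predicted_activity(seq):
    # Stand-in for the trained model: rewards arginine ("R"), mimicking
    # the shortcut the real model initially exploited.
    return sum(3.0 if aa == "R" else 1.0 for aa in seq)

def objective(seq):
    """Predicted activity minus a penalty proportional to arginine content."""
    return predicted_activity(seq) - PENALTY * seq.count("R")

rng = random.Random(0)
best = "".join(rng.choice(AMINO_ACIDS) for _ in range(10))
for _ in range(2000):                        # naive random hill-climb
    cand = list(best)
    cand[rng.randrange(len(cand))] = rng.choice(AMINO_ACIDS)
    cand = "".join(cand)
    if objective(cand) > objective(best):
        best = cand

# With the penalty, each arginine is worth 3.0 - 2.5 = 0.5 points, less than
# any other residue's 1.0, so the search drives arginine out of the result.
print(best.count("R"))
```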
In the end, the ability to interpret predictions proposed by the model was key. “It’s typically not enough to have a black box, because the models could be fixating on something that is not correct, or because it could be exploiting a phenomenon imperfectly,” Gomez-Bombarelli says.
In this case, researchers could overlay predictions generated by the model with the barcode representing sequence structure. “Doing that highlights certain regions that the model thinks play the biggest role in high activity,” Schissel says. “It's not perfect, but it gives you focused regions to play around with. That information would definitely help us in the future to design new sequences empirically.”
Ultimately, the machine-learning model proposed sequences that were more effective than any previously known variant. One in particular can boost PMO delivery by 50-fold. By injecting mice with these computer-suggested sequences, the researchers validated their predictions and demonstrated that the miniproteins are nontoxic.
It is too early to tell how this work will affect patients down the line, but better PMO delivery will be beneficial in several ways. If patients are exposed to lower levels of the drug, they may experience fewer side effects, for example, or require less-frequent doses (PMO is administered intravenously, often on a weekly basis). The treatment may also become less costly. As a testament to the concept, recent clinical trials demonstrated that a proprietary CPP from Sarepta Therapeutics could decrease exposure to PMO by 10-fold. Also, PMO is not the only drug that stands to be improved by miniproteins. In additional experiments, the model-generated miniproteins carried other functional proteins into the cell.
Noticing a disconnect between the work of machine-learning researchers and experimental chemists, Mohapatra has posted the model on GitHub, along with a tutorial for experimentalists who have their own list of sequences and activities. He notes that over a dozen people from across the world have adopted the model so far, repurposing it to make their own powerful predictions for a wide range of drugs.
The research was supported by the MIT Jameel Clinic, Sarepta Therapeutics, the MIT-SenseTime Alliance, and the National Science Foundation.
If you follow autonomous drone racing, you likely remember the crashes as much as the wins. In drone racing, teams compete to see which vehicle is better trained to fly fastest through an obstacle course. But the faster drones fly, the more unstable they become, and at high speeds their aerodynamics can be too complicated to predict. Crashes, therefore, are a common and often spectacular occurrence.
But if they can be pushed to be faster and more nimble, drones could be put to use in time-critical operations beyond the race course, for instance to search for survivors in a natural disaster.
Now, aerospace engineers at MIT have devised an algorithm that helps drones find the fastest route around obstacles without crashing. The new algorithm combines simulations of a drone flying through a virtual obstacle course with data from experiments of a real drone flying through the same course in a physical space.
The researchers found that a drone trained with their algorithm flew through a simple obstacle course up to 20 percent faster than a drone trained on conventional planning algorithms. Interestingly, the new algorithm didn’t always keep a drone ahead of its competitor throughout the course. In some cases, it chose to slow a drone down to handle a tricky curve, or save its energy in order to speed up and ultimately overtake its rival.
“At high speeds, there are intricate aerodynamics that are hard to simulate, so we use experiments in the real world to fill in those black holes to find, for instance, that it might be better to slow down first to be faster later,” says Ezra Tal, a graduate student in MIT’s Department of Aeronautics and Astronautics. “It’s this holistic approach we use to see how we can make a trajectory overall as fast as possible.”
“These kinds of algorithms are a very valuable step toward enabling future drones that can navigate complex environments very fast,” adds Sertac Karaman, associate professor of aeronautics and astronautics and director of the Laboratory for Information and Decision Systems at MIT. “We are really hoping to push the limits in a way that they can travel as fast as their physical limits will allow.”
Tal, Karaman, and MIT graduate student Gilhyun Ryou have published their results in the International Journal of Robotics Research.
Training drones to fly around obstacles is relatively straightforward if they are meant to fly slowly. That’s because aerodynamics such as drag don’t generally come into play at low speeds, and they can be left out of any modeling of a drone’s behavior. But at high speeds, such effects are far more pronounced, and how the vehicles will handle is much harder to predict.
“When you’re flying fast, it’s hard to estimate where you are,” Ryou says. “There could be delays in sending a signal to a motor, or a sudden voltage drop which could cause other dynamics problems. These effects can’t be modeled with traditional planning approaches.”
To understand how high-speed aerodynamics affect drones in flight, researchers have to run many experiments in the lab, setting drones at various speeds and trajectories to see which fly fast without crashing — an expensive, and often crash-inducing, training process.
Instead, the MIT team developed a high-speed flight-planning algorithm that combines simulations and experiments, in a way that minimizes the number of experiments required to identify fast and safe flight paths.
The researchers started with a physics-based flight planning model, which they developed to first simulate how a drone is likely to behave while flying through a virtual obstacle course. They simulated thousands of racing scenarios, each with a different flight path and speed pattern. They then charted whether each scenario was feasible (safe), or infeasible (resulting in a crash). From this chart, they could quickly zero in on a handful of the most promising scenarios, or racing trajectories, to try out in the lab.
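The screening step described here (cheap simulation, feasibility labeling, then shortlisting) can be sketched as follows. The one-dimensional "simulator" and its noisy crash threshold are invented stand-ins, not the team's physics-based flight model.

```python
import random

# Hedged sketch of the screening step: simulate many candidate trajectories
# cheaply, mark each feasible or infeasible, and keep only the fastest
# feasible ones to try in the real world. The simulator is a stand-in.

rng = random.Random(42)

def simulate(speed):
    """Low-fidelity stand-in: above some (noisy) speed, predict a crash."""
    crash_speed = 18.0 + rng.gauss(0, 1.0)   # m/s; invented threshold
    time = 100.0 / speed                      # time to finish a 100 m course
    feasible = speed < crash_speed
    return time, feasible

# Simulate many candidate speed profiles and chart feasible vs. infeasible
candidates = [rng.uniform(5.0, 25.0) for _ in range(1000)]
results = [(speed, *simulate(speed)) for speed in candidates]

# Zero in on the most promising scenarios: the fastest feasible ones
feasible = [(time, speed) for speed, time, ok in results if ok]
shortlist = sorted(feasible)[:5]              # top 5 to try in the lab
for time, speed in shortlist:
    print(f"speed {speed:5.1f} m/s -> course time {time:5.2f} s")
```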
“We can do this low-fidelity simulation cheaply and quickly, to see interesting trajectories that could be both fast and feasible. Then we fly these trajectories in experiments to see which are actually feasible in the real world,” Tal says. “Ultimately we converge to the optimal trajectory that gives us the lowest feasible time.”
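The simulate-then-shortlist loop Tal describes can be sketched in a few lines. This is an illustrative stand-in, not the team’s actual planner: the “low-fidelity simulator,” its stability score, and the candidate parameters are invented for the example.

```python
import random

def simulate_low_fidelity(speed, turn_margin):
    """Cheap stand-in for a physics-based simulator: returns whether a
    candidate trajectory looks feasible, plus its predicted course time.
    The stability score (invented here) penalizes flying fast with
    little clearance around obstacles."""
    stability = turn_margin - 0.02 * speed
    course_time = 100.0 / speed  # fixed course length / average speed
    return stability > 0, course_time

def shortlist(candidates, k=5):
    """Simulate every candidate cheaply, drop the infeasible ones, and
    return the k fastest -- the only ones worth expensive flight tests."""
    feasible = [(simulate_low_fidelity(s, m)[1], s, m)
                for s, m in candidates
                if simulate_low_fidelity(s, m)[0]]
    feasible.sort()  # fastest predicted course times first
    return feasible[:k]

random.seed(0)
# 1,000 candidate (average speed in m/s, turn margin in m) pairs
candidates = [(random.uniform(2.0, 15.0), random.uniform(0.05, 0.4))
              for _ in range(1000)]
best = shortlist(candidates)
```

In the researchers’ setup, the shortlisted trajectories would then be flown on real hardware, and the measured outcomes used to converge on the fastest trajectory that is actually feasible.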
Going slow to go fast
To demonstrate their new approach, the researchers simulated a drone flying through a simple course with five large, square-shaped obstacles arranged in a staggered configuration. They set up this same configuration in a physical training space, and programmed a drone to fly through the course at speeds and trajectories that they previously picked out from their simulations. They also ran the same course with a drone trained on a more conventional algorithm that does not incorporate experiments into its planning.
Overall, the drone trained on the new algorithm “won” every race, completing the course in a shorter time than the conventionally trained drone. In some scenarios, the winning drone finished the course 20 percent faster than its competitor, even though it took a trajectory with a slower start, for instance taking a bit more time to bank around a turn. This kind of subtle adjustment was not made by the conventionally trained drone, likely because its trajectories, based solely on simulations, could not entirely account for aerodynamic effects that the team’s experiments revealed in the real world.
The researchers plan to fly more experiments, at faster speeds, and through more complex environments, to further improve their algorithm. They also may incorporate flight data from human pilots who race drones remotely, and whose decisions and maneuvers might help zero in on even faster yet still feasible flight plans.
“If a human pilot is slowing down or picking up speed, that could inform what our algorithm does,” Tal says. “We can also use the trajectory of the human pilot as a starting point and improve on it, to see what humans don’t do that our algorithm can figure out, to fly faster. Those are some future ideas we’re thinking about.”
This research was supported, in part, by the U.S. Office of Naval Research.
Every year, more than 200 million people are infected with malaria, and nearly 500,000 die from the disease. Existing drugs can treat the infection, but the parasite that causes the disease has evolved resistance to many of them.
To help overcome that resistance, scientists are now searching for drugs that hit novel molecular targets within the Plasmodium falciparum parasite that causes malaria. An international team that includes MIT researchers has identified a potential new target: acetyl-CoA synthetase, an enzyme that is necessary for the parasite’s survival. They found that two promising compounds that were identified in a large-scale drug screen in 2018 appear to block this enzyme.
The findings suggest that these compounds, or similar molecules that hit the same target, could eventually be developed as effective malaria drugs, the researchers say.
“These compounds provide a possible starting point for optimization, and an understanding that the target is druggable, potentially by other molecules with desirable pharmacological properties,” says Jacquin Niles, a professor of biological engineering at MIT, director of the MIT Center for Environmental Health Sciences, and a senior author of the study along with Dyann Wirth, the Richard Pearson Strong Professor of Infectious Disease at the Harvard T.H. Chan School of Public Health and institute member of the Broad Institute of MIT and Harvard.
Beatriz Baragana, a medicinal chemist at the University of Dundee, and Amanda Lukens, a senior research scientist at the Broad Institute of MIT and Harvard, are corresponding authors of the study, which appears in Cell Chemical Biology. The lead authors are Charisse Flerida Pasaje, a senior postdoc at MIT; Robert Summers, a postdoc at the Harvard T.H. Chan School of Public Health; and Joao Pisco from the University of Dundee.
Mechanism of action
The new study grew out of the Malaria Drug Accelerator (MalDA), an international consortium of infectious disease experts from universities and pharmaceutical companies that are seeking new drugs for malaria, funded by the Bill and Melinda Gates Foundation.
“The mandate of the group is to come up with new antimalarial targets that are good candidates for drug development,” Niles says. “We have had some really effective antimalarial drugs, but eventually resistance becomes an issue, so a big challenge is finding the next effective drug without immediately running into cross-resistance problems.”
The group’s previous screens have uncovered many candidate drugs. In the new study, the team set out to try to discover the targets of two compounds that emerged from their 2018 screen. “Understanding the mechanism of such drug candidates can help researchers during optimization and uncover potential drawbacks early in the process,” Niles says.
The researchers used several experimental techniques to discover the target of the two compounds. In one set of experiments, they generated resistant versions of Plasmodium falciparum by repeatedly exposing the parasites to the drugs. Then they sequenced the genomes of these parasites, which revealed that mutations in an enzyme called acetyl-CoA synthetase helped them to become resistant.
Other studies, including metabolic profiling, genome editing, and differential sensitization using conditional knockdown of target protein expression, confirmed that this enzyme is inhibited by the two compounds. Acetyl-CoA synthetase is an enzyme that catalyzes the production of acetyl-CoA, a molecule that is involved in many cellular functions, including regulation of gene expression. The researchers’ studies suggested that one of the drug candidates binds to the enzyme’s binding site for acetate, while the other blocks the binding site for CoA.
The researchers also found that in Plasmodium falciparum cells, acetyl-CoA synthetase is located primarily in the nucleus. This and other evidence led them to conclude that the enzyme is involved in histone acetylation. This process allows cells to regulate which genes they express by transferring acetyl groups from acetyl-CoA onto histone proteins, the spools around which DNA winds.
The Niles and Wirth labs are now investigating how compounds that interfere with histone acetylation might disrupt gene regulation in the parasite, and how such disruption could lead to parasite death.
None of the currently approved malaria drugs target acetyl-CoA synthetase, and the identified compounds appear to bind preferentially to the version of the enzyme found in the malaria parasite, making it a promising drug target, the researchers say.
“Further studies need to be carried out to assess their potency against human cell lines, but these are promising compounds, and acetyl-CoA synthetase is an attractive target to push forward into the antimalarial drug discovery pipeline,” Pasaje says.
The compounds can also kill Plasmodium falciparum at multiple stages of its life cycle, including the stages when it infects human liver cells and red blood cells. Most existing drugs target only the form of the parasite that infects red blood cells.
Members of the MalDA consortium at the University of Dundee are working on screening compound libraries to identify additional candidates that have mechanisms of action similar to those of the two recently discovered compounds and may have more desirable pharmaceutical properties.
“Ideally, there will be an opportunity to examine several potential scaffolds in parallel early, to then choose the most promising candidate(s) for optimization towards use in humans,” Niles says.
The research was funded in part by the Gates Foundation, the Global Health Technology Fund, and the Medicines for Malaria Venture.
The justice system, banks, and private companies use algorithms to make decisions that have profound impacts on people’s lives. Unfortunately, those algorithms are sometimes biased — disproportionately impacting people of color as well as individuals in lower income classes when they apply for loans or jobs, or even when courts decide what bail should be set while a person awaits trial.
MIT researchers have developed a new artificial intelligence programming language that can assess the fairness of algorithms more exactly, and more quickly, than available alternatives.
Their Sum-Product Probabilistic Language (SPPL) is a probabilistic programming system. Probabilistic programming is an emerging field at the intersection of programming languages and artificial intelligence that aims to make AI systems much easier to develop, with early successes in computer vision, common-sense data cleaning, and automated data modeling. Probabilistic programming languages make it much easier for programmers to define probabilistic models and carry out probabilistic inference — that is, work backward to infer probable explanations for observed data.
“There are previous systems that can solve various fairness questions. Our system is not the first; but because our system is specialized and optimized for a certain class of models, it can deliver solutions thousands of times faster,” says Feras Saad, a PhD student in electrical engineering and computer science (EECS) and first author on a recent paper describing the work. Saad adds that the speedups are not insignificant: The system can be up to 3,000 times faster than previous approaches.
SPPL gives fast, exact solutions to probabilistic inference questions such as “How likely is the model to recommend a loan to someone over age 40?” or “Generate 1,000 synthetic loan applicants, all under age 30, whose loans will be approved.” These inference results are based on SPPL programs that encode probabilistic models of what kinds of applicants are likely, a priori, and also how to classify them. Fairness questions that SPPL can answer include “Is there a difference between the probability of recommending a loan to an immigrant and nonimmigrant applicant with the same socioeconomic status?” or “What’s the probability of a hire, given that the candidate is qualified for the job and from an underrepresented group?”
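To make concrete what a fast, exact answer to such a query looks like, the toy Python below computes a loan-style conditional probability by exhaustively summing a small discrete model, so the result is closed-form rather than sampled. The applicant distribution, its probabilities, and the decision tree are invented for illustration; this is not SPPL code or its API.

```python
from itertools import product

# Invented prior over applicants (probabilities are illustrative only).
AGE = {"under_30": 0.3, "30_to_40": 0.3, "over_40": 0.4}
INCOME = {"low": 0.5, "high": 0.5}

def approve(age, income):
    """A tiny decision-tree classifier -- the model class the article
    says SPPL is specialized for."""
    if income == "high":
        return True
    return age == "30_to_40"

def prob(event, given=lambda a, i: True):
    """Exact conditional probability by summing the joint distribution:
    no sampling, hence no approximation error."""
    num = den = 0.0
    for (a, pa), (i, pi) in product(AGE.items(), INCOME.items()):
        p = pa * pi
        if given(a, i):
            den += p
            if event(a, i):
                num += p
    return num / den

# "How likely is the model to recommend a loan to someone over age 40?"
p_over_40 = prob(approve, given=lambda a, i: a == "over_40")
```

SPPL’s contribution is doing this kind of exact summation efficiently for far larger models, by compiling programs into sum-product expressions instead of enumerating every outcome.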
SPPL differs from most probabilistic programming languages in that it only allows users to write probabilistic programs for which it can automatically deliver exact probabilistic inference results. SPPL also makes it possible for users to check how fast inference will be, and therefore avoid writing slow programs. In contrast, other probabilistic programming languages such as Gen and Pyro allow users to write down probabilistic programs where the only known ways to do inference are approximate — that is, the results include errors whose nature and magnitude can be hard to characterize.
Error from approximate probabilistic inference is tolerable in many AI applications. But it is undesirable to have inference errors corrupting results in socially impactful applications of AI, such as automated decision-making, and especially in fairness analysis.
Jean-Baptiste Tristan, associate professor at Boston College and former research scientist at Oracle Labs, who was not involved in the new research, says, “I’ve worked on fairness analysis in academia and in real-world, large-scale industry settings. SPPL offers improved flexibility and trustworthiness over other PPLs on this challenging and important class of problems due to the expressiveness of the language, its precise and simple semantics, and the speed and soundness of the exact symbolic inference engine.”
SPPL avoids errors by restricting to a carefully designed class of models that still includes a broad class of AI algorithms, including the decision tree classifiers that are widely used for algorithmic decision-making. SPPL works by compiling probabilistic programs into a specialized data structure called a “sum-product expression.” SPPL further builds on the emerging theme of using probabilistic circuits as a representation that enables efficient probabilistic inference. This approach extends prior work on sum-product networks to models and queries expressed via a probabilistic programming language. However, Saad notes that this approach comes with limitations: “SPPL is substantially faster for analyzing the fairness of a decision tree, for example, but it can’t analyze models like neural networks. Other systems can analyze both neural networks and decision trees, but they tend to be slower and give inexact answers.”
“SPPL shows that exact probabilistic inference is practical, not just theoretically possible, for a broad class of probabilistic programs,” says Vikash Mansinghka, an MIT principal research scientist and senior author on the paper. “In my lab, we’ve seen symbolic inference driving speed and accuracy improvements in other inference tasks that we previously approached via approximate Monte Carlo and deep learning algorithms. We’ve also been applying SPPL to probabilistic programs learned from real-world databases, to quantify the probability of rare events, generate synthetic proxy data given constraints, and automatically screen data for probable anomalies.”
The new SPPL probabilistic programming language was presented in June at the ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI), in a paper that Saad co-authored with MIT EECS Professor Martin Rinard and Mansinghka. SPPL is implemented in Python and is available open source.
Inspired by the sticky substance that barnacles use to cling to rocks, MIT engineers have designed a strong, biocompatible glue that can seal injured tissues and stop bleeding.
The new paste can adhere to surfaces even when they are covered with blood, and can form a tight seal within about 15 seconds of application. Such a glue could offer a much more effective way to treat traumatic injuries and to help control bleeding during surgery, the researchers say.
“We are solving an adhesion problem in a challenging environment, which is this wet, dynamic environment of human tissues. At the same time, we are trying to translate this fundamental knowledge into real products that can save lives,” says Xuanhe Zhao, a professor of mechanical engineering and civil and environmental engineering at MIT and one of the senior authors of the study.
Christoph Nabzdyk, a cardiac anesthesiologist and critical care physician at the Mayo Clinic in Rochester, Minnesota, is also a senior author of the paper, which appears today in Nature Biomedical Engineering. MIT Research Scientist Hyunwoo Yuk and postdoc Jingjing Wu are the lead authors of the study.
Finding ways to stop bleeding is a longstanding problem that has not been adequately solved, Zhao says. Sutures are commonly used to seal wounds, but putting stitches in place is a time-consuming process that usually isn’t possible for first responders to perform during an emergency situation. Among members of the military, blood loss is the leading cause of death following a traumatic injury, and among the general population, it is the second leading cause of death following a traumatic injury.
In recent years, some materials that can halt bleeding, also called hemostatic agents, have become commercially available. Many of these consist of patches that contain clotting factors, which help blood to clot on its own. However, these require several minutes to form a seal and don’t always work on wounds that are bleeding profusely.
Zhao’s lab has been working to address this problem for several years. In 2019, his team developed a double-sided tissue tape and showed that it could be used to close surgical incisions. This tape, inspired by the sticky material that spiders use to capture their prey in wet conditions, includes charged polysaccharides that can absorb water from a surface almost instantaneously, clearing off a small dry patch that the glue can adhere to.
For their new tissue glue, the researchers once again drew inspiration from the natural world. This time, they focused their attention on the barnacle, a small crustacean that attaches itself to rocks, ship hulls, and even other animals such as whales. These surfaces are wet and often dirty — conditions that make adhesion difficult.
“This caught our eye,” Yuk says. “It's very interesting because to seal bleeding tissues, you have to fight with not only wetness but also the contamination from this outcoming blood. We found that this creature living in a marine environment is doing exactly the same thing that we have to do to deal with complicated bleeding issues.”
The researchers’ analysis of barnacle glue revealed that it has a unique composition. The sticky protein molecules that help barnacles attach to surfaces are suspended in an oil that repels water and any contaminants found on the surface, allowing the adhesive proteins to attach firmly to the surface.
The MIT team decided to try to mimic this glue by adapting an adhesive they had previously developed. This sticky material consists of a polymer called poly(acrylic acid) embedded with an organic compound called an NHS ester, which provides adhesion, and chitosan, a sugar that strengthens the material. The researchers froze sheets of this material, ground it into microparticles, and then suspended those particles in medical grade silicone oil.
When the resulting paste is applied to a wet surface such as blood-covered tissue, the oil repels the blood and other substances that may be present, allowing the adhesive microparticles to crosslink and form a tight seal over the wound. Within 15 to 30 seconds of applying the glue, with gentle pressure applied, the glue sets and bleeding stops, the researchers showed in tests in rats.
One advantage of this new material over the double-sided tape the researchers designed in 2019 is that the paste can be molded to fit irregular wounds, while tape could be better suited to sealing surgical incisions or attaching medical devices to tissues, the researchers say. “The moldable paste can flow in and fit any irregular shape and seal it,” Wu says. “This gives freedom to the users to adapt it to irregular-shaped bleeding wounds of all kinds.”
Better bleeding control
In tests in pigs, Nabzdyk and his colleagues at the Mayo Clinic found that the glue was able to rapidly stop bleeding in the liver, and it worked much faster and more effectively than the commercially available hemostatic agents that they compared it to. It even worked when strong blood thinners (heparin) were given to the pigs so that the blood did not form clots spontaneously.
Their studies showed that the seal remains intact for several weeks, giving the tissue below time to heal itself, and that the glue induced little inflammation, similar to that produced by currently used hemostatic agents. The glue is slowly resorbed within the body over months, and it can also be removed earlier by applying a solution that dissolves it, if surgeons need to go in after the initial application to repair the wound.
The researchers now plan to test the glue on larger wounds, which they hope will demonstrate that the glue would be useful to treat traumatic injuries. They also envision that it could be useful during surgical procedures, which often require surgeons to spend a great deal of time controlling bleeding.
“We’re technically capable of carrying out a lot of complicated surgeries, but we haven’t really advanced as fast in the ability to control especially severe bleeding expeditiously,” Nabzdyk says.
Another possible application would be to help stop bleeding that occurs in patients who have plastic tubes inserted into their blood vessels, such as those used for arterial or central venous catheters or for extracorporeal membrane oxygenation (ECMO). During ECMO, a machine is used to pump the patient’s blood outside of the body to oxygenate it. It is used to treat people with profound heart or lung failure. Tubes often remain inserted for weeks or months, and bleeding at the sites of insertion can lead to infection.
The researchers have received funding from the MIT Deshpande Center to help them work toward commercializing their glue, which they hope to do after performing additional preclinical studies in animal models. The research was also funded by the National Institutes of Health, the National Science Foundation, the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies, and the Zoll Foundation.
Engineers at MIT and Harvard University have designed a small tabletop device that can detect SARS-CoV-2 from a saliva sample in about an hour. In a new study, they showed that the diagnostic is just as accurate as the PCR tests now used.
The device can also be used to detect specific viral mutations linked to some of the SARS-CoV-2 variants that are now circulating. This result can also be obtained within an hour, potentially making it much easier to track different variants of the virus, especially in regions that don’t have access to genetic sequencing facilities.
“We demonstrated that our platform can be programmed to detect new variants that emerge, and that we could repurpose it quite quickly,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering. “In this study, we targeted the U.K., South African, and Brazilian variants, but you could readily adapt the diagnostic platform to address the Delta variant and other ones that are emerging.”
The new diagnostic, which relies on CRISPR technology, can be assembled for about $15, but those costs could come down significantly if the devices were produced at large scale, the researchers say.
Collins is the senior author of the new study, which appears today in Science Advances. The paper’s lead authors are Helena de Puig, a postdoc at Harvard University’s Wyss Institute for Biologically Inspired Engineering; Rose Lee, an instructor in pediatrics at Boston Children’s Hospital and Beth Israel Deaconess Medical Center and a visiting fellow at the Wyss Institute; Devora Najjar, a graduate student in MIT’s Media Lab; and Xiao Tan, a clinical fellow at the Wyss Institute and an instructor in gastroenterology at Massachusetts General Hospital.
A self-contained diagnostic
The new diagnostic is based on SHERLOCK, a CRISPR-based tool that Collins and others first reported in 2017. Components of the system include an RNA guide strand that allows detection of specific target RNA sequences, and Cas enzymes that cleave those sequences and produce a fluorescent signal. All of these molecular components can be freeze-dried for long-term storage and reactivated upon exposure to water.
Last year, Collins’ lab began working on adapting this technology to detect the SARS-CoV-2 virus, hoping that they could design a diagnostic device that could yield rapid results and be operated with little or no expertise. They also wanted it to work with saliva samples, making it even easier for users.
To achieve that, the researchers had to incorporate a critical pre-processing step that disables enzymes called salivary nucleases, which destroy nucleic acids such as RNA. Once the sample goes into the device, the nucleases are inactivated by heat and two chemical reagents. Then, viral RNA is extracted and concentrated by passing the saliva through a membrane.
“That membrane was key to collecting the nucleic acids and concentrating them so that we can get the sensitivity that we are showing with this diagnostic,” Lee says.
This RNA sample is then exposed to freeze-dried CRISPR/Cas components, which are activated by automated puncturing of sealed water packets within the device. The one-pot reaction amplifies the RNA sample and then detects the target RNA sequence, if present.
“Our goal was to create an entirely self-contained diagnostic that requires no other equipment,” Tan says. “Essentially the patient spits into this device, and then you push down a plunger and you get an answer an hour later.”
The researchers designed the device, which they call minimally instrumented SHERLOCK (miSHERLOCK), so that it can have up to four modules that each look for a different target RNA sequence. The original module contains RNA guide strands that detect any strain of SARS-CoV-2. Other modules are specific to mutations associated with some of the variants that have arisen in the past year, including B.1.1.7, P.1, and B.1.351.
The Delta variant was not yet widespread when the researchers performed this study, but because the system is already built, they say it should be straightforward to design a new module to detect that variant. The system could also be easily programmed to monitor for new mutations that could make the virus more infectious.
“If you want to do more of a broad epidemiological survey, you can design assays before a mutation of concern appears in a population, to monitor for potentially dangerous mutations in the spike protein,” Najjar says.
The researchers first tested their device with human saliva spiked with synthetic SARS-CoV-2 RNA sequences, and then with about 50 samples from patients who had tested positive for the virus. They found that the device was just as accurate as the gold standard PCR tests now used, which require nasal swabs and take more time and significantly more hardware and sample handling to yield results.
The device produces a fluorescent readout that can be seen with the naked eye, and the researchers also designed a smartphone app that can read the results and send them to public health departments for easier tracking.
The researchers believe their device could be produced at a cost as low as $2 to $3 per device. If approved by the FDA and manufactured at large scale, they envision that this kind of diagnostic could be useful either for people who want to be able to test at home, or in health care centers in areas without widespread access to PCR testing or genetic sequencing of SARS-CoV-2 variants.
“The ability to detect and track these variants is essential to effective public health, but unfortunately, variants are currently diagnosed only by nucleic acid sequencing at specialized epidemiological centers that are scarce even in resource-rich nations,” de Puig says.
The research was funded by the Wyss Institute; the Paul G. Allen Frontiers Group; the Harvard University Center for AIDS Research, which is supported by the National Institutes of Health; a Burroughs-Wellcome American Society of Tropical Medicine and Hygiene postdoctoral fellowship; an American Gastroenterological Association Takeda Pharmaceutical Research Scholar Award; and an MIT-TATA Center fellowship.
For the past 50 years, mechanical engineering students at MIT have convened on campus for a boisterous robot competition. Since the 1970s, when the late Professor Emeritus Woodie Flowers first challenged students to build a machine using a “kit of junk,” students in class 2.007 (Design and Manufacturing I) have designed and built their own robots to compete in the class’s final robot competition. For many students, the class and competition are a driving factor in their decision to enroll in MIT.
“Each year, students tell us that they came to MIT specifically to take 2.007 and participate in the mayhem of creating these robots and doing this fun competition,” says Amos Winter, associate professor of mechanical engineering and 2.007 co-lead instructor.
This was the case for Julianna Rodriguez, a rising senior studying mechanical engineering. “For me, 2.007 was the class I’ve been most looking forward to at MIT. It serves as a bridge between the technical classes I’ve taken and actually being able to build something tangible,” says Rodriguez.
As with many hands-on classes, in March 2020 the faculty and teaching staff of 2.007 had to scrap plans for in-person elements, including the iconic final robot competition. While the team quickly pivoted to a version of the class focused on computer-aided design (CAD) and analysis, many, including Winter, were left heartbroken that students weren’t able to build and compete with their robot as they had hoped.
“I really felt that students lost out on that physical connection to mechanical engineering that comes from hands-on work. They didn’t learn all those important lessons of building something in the real world, having it fail, then figuring out how to fix it,” says Winter.
Winter and 2.007 co-lead instructor, Sangbae Kim, professor of mechanical engineering, immediately started envisioning what 2.007 might look like in spring 2021.
“As an educator, it was such a great learning opportunity. The challenges forced us to think differently and be more creative,” adds Kim.
Knowing there would be many challenges associated with giving students the hands-on, confidence-building experience 2.007 usually provides, Winter led an effort to re-imagine the class and account for any eventuality. He secured resources from MIT’s Department of Mechanical Engineering, including hiring Antoni Soledad ’21, who participated in the Undergraduate Research Opportunity Program (UROP) under Winter’s guidance. Winter took the fall semester off from teaching so he could focus on developing an entirely new 2.007 curriculum that maintained the class’s core tenets, including how the team could pull off a live, head-to-head, remote robot competition.
A 130-pound delivery
The team developed a list of “must haves” when brainstorming how to re-envision 2.007. First and foremost was retaining the confidence-building element of students coming up with their own design and building it with their own two hands, while at the same time learning core mechanical engineering principles.
“It was critical for this year's class for us to provide all the magic and deep engineering learning that comes from our normal hands-on experience, but do it in a remote environment,” adds Winter.
Central to all the teaching staff’s goals was equity. With some students likely participating fully remotely, the teaching team had to come up with solutions that were fair to everyone, whether they lived in a dorm, apartment, or across the country in their family’s home.
The team’s solution was to send a kit of materials to all 130 students in the class. The first generation of the kit was developed last summer by Soledad. By fall, Winter and Soledad were joined by teaching assistant Georgia Van de Zande and four other UROP students to continue iterating on the kit.
“Because the students couldn’t come to the Pappalardo lab, we decided to send the Pappalardo lab to them,” says Van de Zande.
The team put a great deal of thought into the size, weight, and composition of these “Pappalardo-in-a-Box” kits. The result was a 130-pound kit filled with tools and materials students could use to build their own robots from home throughout the semester. There was enough variety in the materials to ensure each student could come up with their own creative and unique design.
“It’s really exciting that these students are starting their mechanical engineering journey with a literal toolbox full of tools they will use throughout their careers,” adds Van de Zande.
The team also identified a fitting theme for this year’s competition: “Home Alone: Together.” The theme was a nod to the classic holiday movie and the fact that students would be designing, building, and competing primarily from their homes throughout the semester.
A hybrid semester
With the kits delivered to students, Winter, Kim, and their team focused on how to actually teach students core design and manufacturing principles, regardless of where they were located.
All formal lab exercises were done virtually. Students would often meet one-on-one with instructors and lab staff via Zoom. This remote setup actually increased the amount of individual attention each student received from staff.
“In so many instances, virtual has been better than our normal format,” says Winter. “We have really meaningful one-on-one meetings with the students during our lab sections, because there's no distractions like there would be in the physical lab.”
The class also featured an optional in-person element open to students who had access to campus. Those students would visit the Pappalardo lab in small groups to interact with lab staff. The Pappalardo shop staff set up socially distanced workstations where students could safely work in the lab and get staff feedback. These workstations featured the exact same tools and materials every student had at home. The staff made sure they were just as accessible via Zoom to the students who could not come to campus.
One new feature of this year’s class was “Bill’s Build Demos” — a series of videos spearheaded by Bill Cormier, project technician in the Pappalardo lab, and produced by the shop staff. These videos demonstrated common elements in robotic design and the nuances at play in fabrication and assembly. The demos proved to be crucial in deepening students’ understanding of mechanism design; this was reflected in the quality and reliability of the robots they produced for the final competition.
The individual attention and dedication the 2.007 teaching team showed throughout the semester had a tangible impact on students.
“In my honest opinion, the 2.007 staff has been one of the most nurturing and understanding faculty that I've ever encountered,” says Megan Ngo, a rising junior studying mechanical engineering. “I’ve learned so much: the physics behind making a part, how to start a project from start to finish and just how cool mechanical engineering can be in the real world.”
MIT’s first-ever virtual robot competition
Students put the skills they learned throughout the semester to the test. Armed with their unique robot design, their own game board, and a camera, students were ready to participate in the first known live, head-to-head, remote physical robot competition.
The settings for the competition varied. Ben Owen Block and his roommate, who also took the class, organized the layout of their dorm room around the competition. Senior Julianna Rodriguez built her robot from an apartment in West Roxbury, Massachusetts. John Malloy started the semester in Colorado before moving home to Florida, where he built his robot in his childhood bedroom.
Dressed as “Home Alone” villains Harry and Marv, Winter and Kim emceed the competition as students’ robots went head-to-head in a series of elimination rounds. While each robot competed from a different location, the two competitors per round ran at the same time with their video feeds mixed into a live webcast. Robots performed tasks on a game board, modeled after the booby-trapped McCallister house from the movie. The robot that earned the most points advanced to the next round.
In the end, Jordan Ambrosio’s robot emerged as the champion. But according to Winter, everyone was a winner this year.
“To come out of this semester, see what the students accomplished, and know that the educational experience of the class has been retained, it feels triumphant,” says Winter.
For Kim, a key improvement this year was the easy access students had to their robots. Rather than only work on their robots in the lab, students were able to tinker with their robots in their spare time.
“We found that students could manage their time much better when they had their tools and materials right in their rooms,” says Kim.
While the 2022 robot competition will hopefully be back in the Johnson Ice Rink, the 2.007 teaching team plans to carry many of the positive changes made in response to the pandemic into future semesters.
Gerald N. Wogan, the Underwood Prescott Professor of Biological Engineering, Chemistry, and Toxicology emeritus at MIT, passed away after a long illness on July 16 at the age of 91.
"Jerry" Wogan was a pioneering scientist who isolated, characterized, and established the mechanisms of action of many environmental toxins of great relevance to global public health. His leadership on aflatoxin research, a toxin that impacts the lives of billions of people, is a paradigm for environmental toxicology. His work ranged from basic mechanistic studies at the cell level to the development of animal models of disease, the study of disease patterns in populations, and, ultimately, the development of agents that induce biochemical pathways that protect people from toxin-induced disease.
During his 60-year career, Wogan trained over 75 graduate students and postdocs, who themselves went on to become leaders in the environmental health field. Former student John D. Groopman PhD '79, who led environmental health sciences at Johns Hopkins University for 20 years, recalls: "While Jerry was a great scientific leader respected by his peers, it was his humanity and commitment to the translation of basic science to the public's good that is his lasting legacy to his students and their students in turn."
John Essigmann PhD '76, the past director of the MIT Center for Environmental Health Sciences and associate head of chemistry, says, "Jerry was always open to new ideas and had a gift for taking an idea and projecting its impact on the global stage. He encouraged us to think big and see the broader impact of our work."
Wogan was born in 1930 in the railroad town of Altoona, Pennsylvania. His father was a railroad worker, and Wogan decided to attend Juniata College in 1947 in part because his father had a company pass that allowed him to visit his son at school. Wogan worked his way through college as a truck driver and was a member of the Teamsters' Union. In 1951, Wogan moved on to graduate work at the University of Illinois at Urbana, where he studied physiology, biochemistry, and microbiology with eminent physiologist Robert E. Anderson, and met his future wife, Holly, a special education teacher who became a surrogate parent to generations of Wogan lab members. The two married in 1957, the year Wogan received his PhD.
After his doctoral work and a brief teaching job at Rutgers University, Wogan sat by chance on an airplane next to Institute Professor emeritus Nevin Scrimshaw, recruiting faculty for what has become the MIT Department of Biological Engineering (Course 20). Wogan so impressed Scrimshaw during that flight that he was recruited to the MIT faculty and eventually took over as department head from 1979 to 1987.
In early work, Wogan and his longtime collaborator, chemist George Büchi, isolated a fungal toxin called aflatoxin B1 from peanuts infested with a fungus, Aspergillus flavus. In a chemistry and public health milestone, they identified the structure of the toxin and established methods for measuring it in foods and other environmental samples. The translation of this basic research to international policy and regulation of a potent carcinogen was a unique achievement that has impacted how the U.S. Food and Drug Administration, Environmental Protection Agency, and International Agency for Research on Cancer (IARC) evaluate potential carcinogens. Based on this research, Wogan participated in Volume 1 of the IARC evaluation of potential human carcinogens in 1972. This IARC program has become the gold standard for cancer risk assessment.
Wogan then turned to Southeast Asia, where he suspected that aflatoxin might be responsible for an epidemic of liver cancer. With Thai collaborators, Wogan and his student Ronald Shank '59, PhD '65 established an unequivocal association between aflatoxin levels in the food supply and the incidence of liver cancer in Thailand. Later replicated in sub-Saharan Africa and other parts of Asia, this aspect of his work represents a milestone in epidemiology.
Back at MIT, Büchi made derivatives of aflatoxin, and Wogan established animal models, some of which are still used today as pivotal tests for the cancer-causing potential of environmental agents. Wogan's work on aflatoxin quickly expanded to other fungal and bacterial toxins, fossil fuel combustion products, toxic foodborne amines, and the important roles of infection and inflammation as a cause and accelerant of cancer.
Regarding his work on persistent bacterial infections, collaborator Jim Fox comments: "Jerry's collaborative studies with MIT colleagues Peter Dedon, John Essigmann, Steven Tannenbaum '58, PhD '62, and myself, probing the critical role of reactive oxygen species in the pathophysiology of chronic inflammation and carcinogenesis are unique, and I believe, extremely important."
Collaborator Tannenbaum recalls, "Jerry Wogan invented the paradigm for discovering an environmental carcinogen, its metabolism into a DNA damaging agent, developing biomarkers for molecular epidemiology, and monoclonal antibodies for environmental surveillance. His team of graduate students led the way with his guidance and wound up with five faculty positions at top universities, where they continued to drive the field of cancer epidemiology." Tannenbaum, who took the lead in establishing the Wogan Lectureship at MIT and went on to make pathfinding contributions on the roles of nitric oxide in human health and disease, also wrote that Wogan helped him as an early-career scientist move into toxicology.
The impact of Wogan's work on aflatoxin was felt strongly across the globe, where up to 5 billion people are potentially exposed to the toxin each day. Mathuros Ruchirawat PhD '75, vice president for research at the Chulabhorn Research Institute in Bangkok, reflects on the impact Wogan's work had on global public health, research, and teaching in Southeast Asia: "His research has immense and long-lasting impacts on public health in Thailand; the increased public awareness of aflatoxins as a major risk factor for liver cancer has contributed to the prevention of this disease in the country."
William Suk, who directs the national Superfund Research Program and plays a pivotal role in U.S.-Thailand relations, recalls that Jerry's superb qualities as a scientist were complemented by his ability to mentor others: "I remember most his ability to provide sage advice to all."
Three of Wogan's past graduate students and two other MIT professors still teach at Bangkok every summer in a graduate degree program Wogan inspired to address capacity building in the developing world. Colleague Dedon says: "Jerry's vision of science for the public good had true global impact that was much broader than the details of his research."
Ram Sasisekharan, a co-founder of the Bangkok program, says: "Jerry was a true inspiration — focused on problems that need solutions, and had a bold take on complex global problems."
Many of Wogan's colleagues went on to apply their toxicology skills in the pharmaceutical arena.
Gerald McMahon, former president of Sugen and developer of several approved anticancer therapies, says: "Jerry's inspiration and enthusiasm to take a risk and pursue innovation was inspiring and served me well in my biotech career."
Another industry-based colleague, Alexander Wood, former executive director in the oncology department of Novartis Institutes for Biomedical Research and currently a senior lecturer in biological engineering at MIT, remembers Jerry as being "consistently engaged in a broad range of topics in the causation, prevention, and treatment of cancer, and cheerfully willing to offer sound advice and perspective."
Wogan was recognized by many honors. He was a member of the U.S. National Academy of Sciences (1977) and the National Academy of Medicine (1994). He received the Charles S. Mott Prize of the General Motors Cancer Research Foundation (2005), the Medal of Honor of the International Agency for Research on Cancer (2010), The Princess Chulabhorn Gold Medal (2012), the Princess Takamatsu Cancer Research Fund award (2001), the Society of Toxicology lifetime scholar award (2004), the Chemical Industries Institute of Toxicology Founders' award (1999), as well as distinguished alumnus awards from his alma maters, Juniata College (2010) and the University of Illinois (1995).
Wogan's wife of over 50 years, Holly, passed away in 2013. He is survived by his daughter Christine and her husband John; his son Eugene and his wife Vicky; three grandchildren; and two great-grandchildren.
Former students Essigmann, Groopman, and Robert Croy PhD '79 recently reminisced about the Wogan laboratory's many adventures, which reflected Wogan's belief that scientists should get out of the laboratory and experience the outside world. On one trip, the younger members of the Wogan-Büchi group crossed the 45-kilometer Pemigewasset Wilderness in New Hampshire's White Mountains on skis, despite five feet of snow, brutal terrain, and subfreezing temperatures. As was typical of his style, Wogan had chilled champagne waiting at the finish of this long journey. He taught his group that hard work and a task well done are sweeter if one celebrates it in style. They also learned that their education at MIT was a journey, and that such journeys are best taken with friends. As a testimonial to this strategy of research group management, it is striking that so many of the former Wogan research groups are still close friends today, connected by the common bond of their time in his laboratory.
Wogan was a frequent participant in the Aspen Cancer Conference, which the Wogan family has designated as a charity for people who wish to donate in his name.
Pancreatic cancer, which affects about 60,000 Americans every year, is one of the deadliest forms of cancer. After diagnosis, fewer than 10 percent of patients survive for five years.
While some chemotherapies are initially effective, pancreatic tumors often become resistant to them. The disease has also proven difficult to treat with newer approaches such as immunotherapy. However, a team of MIT researchers has now developed an immunotherapy strategy and shown that it can eliminate pancreatic tumors in mice.
The new therapy, which is a combination of three drugs that help boost the body’s own immune defenses against tumors, is expected to enter clinical trials later this year.
“We don’t have a lot of good options for treating pancreatic cancer. It’s a devastating disease clinically,” says William Freed-Pastor, a senior postdoc at MIT’s Koch Institute for Integrative Cancer Research. “If this approach led to durable responses in patients, it would make a big impact in at least a subset of patients’ lives, but we need to see how it will actually perform in trials.”
Freed-Pastor, who is also a medical oncologist at Dana-Farber Cancer Institute, is the lead author of the new study, which appears today in Cancer Cell. Tyler Jacks, the David H. Koch Professor of Biology and a member of the Koch Institute, is the paper’s senior author.
The body’s immune system contains T cells that can recognize and destroy cells that express cancerous proteins, but most tumors create a highly immunosuppressive environment that disables these T cells, helping the tumor to survive.
Immune checkpoint therapy (the most common form of immunotherapy currently being used clinically) works by removing the brakes on these T cells, rejuvenating them so they can destroy tumors. One class of immunotherapy drug that has shown success in treating many types of cancer targets the interactions between PD-L1, a cancer-linked protein that turns off T cells, and PD-1, the T cell protein that PD-L1 binds to. Drugs that block PD-L1 or PD-1, also called checkpoint inhibitors, have been approved to treat cancers such as melanoma and lung cancer, but they have very little effect on pancreatic tumors.
Some researchers had hypothesized that this failure could be due to the possibility that pancreatic tumors don’t express as many cancerous proteins, known as neoantigens. This would give T cells fewer targets to attack, so that even when T cells were stimulated by checkpoint inhibitors, they wouldn’t be able to identify and destroy tumor cells.
However, some recent studies had shown, and the new MIT study confirmed, that many pancreatic tumors do in fact express cancer-specific neoantigens. This finding led the researchers to suspect that perhaps a different type of brake, other than the PD-1/PD-L1 system, was disabling T cells in pancreatic cancer patients.
In a study using mouse models of pancreatic cancer, the researchers found that in fact, PD-L1 is not highly expressed on pancreatic cancer cells. Instead, most pancreatic cancer cells express a protein called CD155, which activates a receptor on T cells known as TIGIT.
When TIGIT is activated, the T cells enter a state known as “T cell exhaustion,” in which they are unable to mount an attack on pancreatic tumor cells. In an analysis of tumors removed from pancreatic cancer patients, the researchers observed TIGIT expression and T cell exhaustion in tumors from about 60 percent of patients, and they also found high levels of CD155 on tumor cells from patients.
“The CD155/TIGIT axis functions in a very similar way to the more established PD-L1/PD-1 axis. TIGIT is expressed on T cells and serves as a brake to those T cells,” Freed-Pastor says. “When a TIGIT-positive T cell encounters any cell expressing high levels of CD155, it can essentially shut that T cell down.”
The researchers then set out to see if they could use this knowledge to rejuvenate exhausted T cells and stimulate them to attack pancreatic tumor cells. They tested a variety of combinations of experimental drugs that inhibit PD-1 and TIGIT, along with another type of drug called a CD40 agonist antibody.
CD40 agonist antibodies, some of which are currently being clinically evaluated to treat pancreatic cancer, are drugs that activate T cells and drive them into tumors. In tests in mice, the MIT team found that drugs against PD-1 had little effect on their own, as has previously been shown for pancreatic cancer. They also found that a CD40 agonist antibody combined with either a PD-1 inhibitor or a TIGIT inhibitor was able to halt tumor growth in some animals, but did not substantially shrink tumors.
However, when they combined CD40 agonist antibodies with both a PD-1 inhibitor and a TIGIT inhibitor, they found a dramatic effect. Pancreatic tumors shrank in about half of the animals given this treatment, and in 25 percent of the mice, the tumors disappeared completely. Furthermore, the tumors did not regrow after the treatment was stopped. “We were obviously quite excited about that,” Freed-Pastor says.
Working with the Lustgarten Foundation for Pancreatic Cancer Research, which helped to fund this study, the MIT team sought out two pharmaceutical companies that between them have a PD-1 inhibitor, TIGIT inhibitor, and CD40 agonist antibody in development. None of these drugs are FDA-approved yet, but they have each reached phase 2 clinical trials. A clinical trial on the triple combination is expected to begin later this year.
“This work uses highly sophisticated, genetically engineered mouse models to investigate the details of immune suppression in pancreas cancer, and the results have pointed to potential new therapies for this devastating disease,” Jacks says. “We are pushing as quickly as possible to test these therapies in patients and are grateful for the Lustgarten Foundation and Stand Up to Cancer for their help in supporting the research.”
Alongside the clinical trial, the MIT team plans to analyze which types of pancreatic tumors might respond best to this drug combination. They are also doing further animal studies to see if they can boost the treatment’s effectiveness beyond the 50 percent that they saw in this study.
In addition to the Lustgarten Foundation, the research was funded by Stand Up To Cancer, the Howard Hughes Medical Institute, Dana-Farber/Harvard Cancer Center, the Damon Runyon Cancer Research Foundation, and the National Institutes of Health.
Biological engineers at MIT have devised a new way to efficiently edit bacterial genomes and program memories into bacterial cells by rewriting their DNA. Using this approach, various forms of spatial and temporal information can be permanently stored for generations and retrieved by sequencing the cells’ DNA.
The new DNA writing technique, which the researchers call HiSCRIBE, is much more efficient than previously developed systems for editing DNA in bacteria, which had a success rate of only about 1 in 10,000 cells per generation. In a new study, the researchers demonstrated that this approach could be used for storing memory of cellular interactions or spatial location.
This technique could also make it possible to selectively edit, activate, or silence genes in certain species of bacteria living in a natural community such as the human microbiome, the researchers say.
“With this new DNA writing system, we can precisely and efficiently edit bacterial genomes without the need for any form of selection, within complex bacterial ecosystems,” says Fahim Farzadfard, a former MIT postdoc and the lead author of the paper. “This enables us to perform genome editing and DNA writing outside of laboratory settings, whether to engineer bacteria, optimize traits of interest in situ, or study evolutionary dynamics and interactions in the bacterial populations.”
Timothy Lu, an MIT associate professor of electrical engineering and computer science and of biological engineering, is the senior author of the study, which appears today in Cell Systems. Nava Gharaei, a former graduate student at Harvard University, and Robert Citorik, a former MIT graduate student, are also authors of the study.
Genome writing and recording memories
For several years, Lu’s lab has been working on ways to use DNA to store information such as memory of cellular events. In 2014, he and Farzadfard developed a way to employ bacteria as a “genomic tape recorder,” engineering E. coli to store long-term memories of events such as a chemical exposure.
To achieve that, the researchers engineered the cells to produce a retron reverse transcriptase, an enzyme that generates single-stranded DNA (ssDNA) when expressed in the cells, and a recombinase enzyme, which can insert (“write”) a specific sequence of single-stranded DNA into a targeted site in the genome. This DNA is produced only when activated by the presence of a predetermined molecule or another type of input, such as light. After the DNA is produced, the recombinase inserts the DNA into a preprogrammed site, which can be anywhere in the genome.
That technique, which the researchers called SCRIBE, had a relatively low writing efficiency. In each generation, out of 10,000 E. coli cells, only one would acquire the new DNA that the researchers tried to incorporate into the cells. This is in part because the E. coli have cellular mechanisms that prevent single-stranded DNA from being accumulated and integrated into their genomes.
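To get a feel for why a writing rate of one cell in 10,000 per generation is so limiting, here is a toy back-of-the-envelope model (not from the paper; it simplistically assumes each unedited cell is edited independently at a fixed rate every generation):

```python
# Toy model: fraction of a population carrying an edit after g generations,
# assuming each unedited cell acquires the edit independently at rate r
# per generation (illustrative simplification, not the study's model).
def edited_fraction(r: float, generations: int) -> float:
    return 1.0 - (1.0 - r) ** generations

scribe = edited_fraction(1e-4, 100)   # SCRIBE-like rate: 1 in 10,000
hiscribe = edited_fraction(0.9, 100)  # near-universal HiSCRIBE-like rate

print(f"SCRIBE-like after 100 generations:   {scribe:.2%}")    # under 1%
print(f"HiSCRIBE-like after 100 generations: {hiscribe:.2%}")  # ~100%
```

Even after 100 generations, a 1-in-10,000 rate leaves less than 1 percent of the population edited, which is why eliminating E. coli's defenses against single-stranded DNA mattered so much.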
In the new study, the researchers tried to boost the efficiency of the process by eliminating some of E. coli’s defense mechanisms against single-stranded DNA. First, they disabled enzymes called exonucleases, which break down single-stranded DNA. They also knocked out genes involved in a system called mismatch repair, which normally prevents integration of single-stranded DNA into the genome.
With those modifications, the researchers were able to achieve near-universal incorporation of the genetic changes they tried to introduce, creating an unparalleled, efficient way to edit bacterial genomes without the need for selection.
“Because of that improvement, we were able to do some applications that we were not able to do with the previous generation of SCRIBE or with other DNA writing technologies,” Farzadfard says.
In their 2014 study, the researchers showed that they could use SCRIBE to record the duration and intensity of exposure to a specific molecule. With their new HiSCRIBE system, they can trace those kinds of exposures as well as additional types of events, such as interactions between cells.
As one example, the researchers showed that they could track a process called bacterial conjugation, during which bacteria exchange pieces of DNA. By integrating a DNA “barcode” into each cell’s genome, which can then be exchanged with other cells, the researchers can determine which cells have interacted with each other by sequencing their DNA to see which barcodes they carry.
This kind of mapping could help researchers study how bacteria communicate with each other within aggregates such as biofilms. If a similar approach could be deployed in mammalian cells, it could someday be used to map interactions between other types of cells such as neurons, Farzadfard says. Viruses that can cross neural synapses could be programmed to carry DNA barcodes that researchers could use to trace connections between neurons, offering a new way to help map the brain’s connectome.
“We are using DNA as the mechanism to record spatial information about the interaction of bacterial cells, and maybe in the future, neurons that have been tagged,” Farzadfard says.
The researchers also showed that they could use this technique to specifically edit the genome of one species of bacteria within a community of many species. In this case, they introduced the gene for an enzyme that breaks down galactose into E. coli cells growing in culture with several other species of bacteria.
This kind of species-selective editing could offer a novel way to make antibiotic-resistant bacteria more susceptible to existing drugs by silencing their resistance genes, the researchers say. However, such treatments would likely require several more years of research to develop, they say.
The researchers also showed that they could use this technique to engineer a synthetic ecosystem made of bacteria and bacteriophages that can continuously rewrite certain segments of their genome and evolve autonomously at a rate higher than would be possible by natural evolution. In this case, they were able to optimize the cells’ ability to consume lactose.
“This approach could be used for evolutionary engineering of cellular traits, or in experimental evolution studies by allowing you to replay the tape of evolution over and over,” Farzadfard says.
The research was funded by the National Institutes of Health, the Office of Naval Research, the National Science Foundation, the Defense Advanced Research Projects Agency, the MIT Center for Microbiome Informatics and Therapeutics, the NSF Expeditions in Computing Program Award, and the Schmidt Science Fellows Program.
When you pick up a balloon, the pressure you apply to keep hold of it is different from the pressure you would exert to grasp a jar. And now engineers at MIT and elsewhere have a way to precisely measure and map such subtleties of tactile dexterity.
The team has designed a new touch-sensing glove that can “feel” pressure and other tactile stimuli. The inside of the glove is threaded with a system of sensors that detects, measures, and maps small changes in pressure across the glove. The individual sensors are highly attuned and can pick up very weak vibrations across the skin, such as from a person’s pulse.
When subjects wore the glove while picking up a balloon versus a beaker, the sensors generated pressure maps specific to each task. Holding a balloon produced a relatively even pressure signal across the entire palm, while grasping a beaker created stronger pressure at the fingertips.
The researchers say the tactile glove could help to retrain motor function and coordination in people who have suffered a stroke or other fine motor condition. The glove might also be adapted to augment virtual reality and gaming experiences. The team envisions integrating the pressure sensors not only into tactile gloves but also into flexible adhesives to track pulse, blood pressure, and other vital signs more accurately than smart watches and other wearable monitors.
“The simplicity and reliability of our sensing structure holds great promise for a diversity of health care applications, such as pulse detection and recovering the sensory capability in patients with tactile dysfunction,” says Nicholas Fang, professor of mechanical engineering at MIT.
Fang and his collaborators detail their results in a study appearing today in Nature Communications. The study’s co-authors include Huifeng Du and Liu Wang at MIT, along with professor Chuanfei Guo’s group at the Southern University of Science and Technology (SUSTech) in China.
Sensing with sweat
The glove’s pressure sensors are similar in principle to sensors that measure humidity. These sensors, found in HVAC systems, refrigerators, and weather stations, are designed as small capacitors, with two electrodes, or metal plates, sandwiching a rubbery “dielectric” material that shuttles electric charges between the two electrodes.
In humid conditions, the dielectric layer acts as a sponge to soak up charged ions from surrounding moisture. This addition of ions changes the capacitance, or amount of charge between the electrodes, in a way that can be quantified and converted to a measurement of humidity.
In recent years, researchers have adapted this capacitive sandwich structure for the design of thin, flexible pressure sensors. The idea is similar: When a sensor is squeezed, the balance of charges in its dielectric layer shifts, in a way that can be measured and converted to pressure. But the dielectric layer in most pressure sensors is relatively bulky, limiting their sensitivity.
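The capacitive principle behind these sensors can be sketched with the standard parallel-plate formula, C = ε₀εᵣA/d: squeezing the dielectric shrinks the gap d between the plates, which raises the capacitance C. The numbers below are purely illustrative, not values from the study:

```python
# Toy parallel-plate model of a capacitive pressure sensor.
# C = eps0 * eps_r * A / d  -> squeezing the gap d raises capacitance C.
# All numbers are illustrative, not from the study.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(eps_r: float, area_m2: float, gap_m: float) -> float:
    return EPS0 * eps_r * area_m2 / gap_m

c_rest = capacitance(eps_r=3.0, area_m2=1e-4, gap_m=100e-6)    # unpressed
c_pressed = capacitance(eps_r=3.0, area_m2=1e-4, gap_m=80e-6)  # gap squeezed 20%

print(f"at rest: {c_rest * 1e12:.1f} pF")
print(f"pressed: {c_pressed * 1e12:.1f} pF")  # higher, since the gap shrank
```

Reading out that capacitance change, and calibrating it against known forces, is what converts a squeeze into a pressure measurement.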
For their new tactile sensors, the MIT and SUSTech team did away with the conventional dielectric layer in favor of a surprising ingredient: human sweat. As sweat naturally contains ions such as sodium and chloride, they reasoned that these ions could serve as dielectric stand-ins. Rather than a sandwich structure, they envisioned two thin, flat electrodes, placed on the skin to form a circuit with a certain capacitance. If pressure was applied to one “sensing” electrode, ions from the skin’s natural moisture would accumulate on the underside, and change the capacitance between both electrodes, by an amount that they could measure.
They found they could boost the sensing electrode’s sensitivity by covering its underside with a forest of tiny, bendy, conductive hairs. Each hair would serve as a microscopic extension of the main electrode, such that, if pressure were applied to, say, a corner of the electrode, the hairs in that specific region would bend in response, and accumulate ions from the skin, the degree and location of which could be precisely measured and mapped.
In their new study, the team fabricated thin, kernel-sized sensing electrodes lined with thousands of microscopic gold filaments, or “micropillars.” They demonstrated that they could accurately measure the degree to which groups of micropillars bent in response to various forces and pressures. When they placed a sensing electrode and a control electrode onto a volunteer’s fingertip, they found the structure was highly sensitive. The sensors were able to pick up subtle phases in the person’s pulse, such as different peaks in the same cycle. They could also maintain accurate pulse readings, even as the person wearing the sensors waved their hands while walking across a room.
“Pulse is a mechanical vibration that can also cause deformation of the skin, which we can’t feel, but the pillars can pick up,” Fang says.
The researchers then applied the concepts of their new, micropillared pressure sensor to the design of a highly sensitive tactile glove. They started with a silk glove, which the team purchased off the shelf. To make pressure sensors, they cut out small squares from carbon cloth, a textile that is composed of many thin filaments similar to micropillars.
They turned each cloth square into a sensing electrode by spraying it with gold, a naturally conductive metal. They then glued the cloth electrodes to various parts of the glove’s inner lining, including the fingertips and palms, and threaded conductive fibers throughout the glove to connect each electrode to the glove’s wrist, where the researchers glued a control electrode.
Several volunteers took turns wearing the tactile glove and performing various tasks, including holding a balloon and gripping a glass beaker. The team collected readings from each sensor to create a pressure map across the glove during each task. The maps revealed distinct and detailed patterns of pressure generated during each task.
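The pressure maps described above amount to placing each sensor’s reading at its known location on the glove. A minimal sketch of that step, in Python — the sensor names, positions, and values here are illustrative assumptions, not data from the study:

```python
# Hypothetical sensor layout: each electrode's name mapped to a (row, col)
# position on a small grid representing the glove's inner surface.
SENSOR_POSITIONS = {
    "thumb_tip": (0, 0),
    "index_tip": (0, 1),
    "palm_center": (2, 1),
}

def pressure_map(readings, grid_shape=(3, 2)):
    """Build a grid of pressures by placing each sensor reading
    at that sensor's known position; unsensed cells stay at 0.0."""
    grid = [[0.0] * grid_shape[1] for _ in range(grid_shape[0])]
    for name, value in readings.items():
        row, col = SENSOR_POSITIONS[name]
        grid[row][col] = value
    return grid

# Example: gripping a beaker loads the fingertips more than the palm.
grip = pressure_map({"thumb_tip": 0.8, "index_tip": 0.9, "palm_center": 0.2})
```

Comparing such grids across tasks is what reveals the distinct per-task pressure patterns the team reports.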
The team plans to use the glove to identify pressure patterns for other tasks, such as writing with a pen and handling other household objects. Ultimately, they envision that such tactile aids could help patients with motor dysfunction calibrate and strengthen their hand dexterity and grip.
“Some fine motor skills require not only knowing how to handle objects, but also how much force should be exerted,” Fang says. “This glove could provide us more accurate measurements of gripping force for control groups versus patients recovering from stroke or other neurological conditions. This could increase our understanding, and enable control.”
This research was supported, in part, by the Joint Center for Mechanical Engineering Research and Education at MIT and SUSTech.
How do authoritarian regimes sustain their popularity? A novel study in China led by MIT scholars shows that anticorruption punishments meted out by government authorities receive significant support among citizens — who believe such actions demonstrate both competence and morally righteous leadership.
The findings help explain how authoritarian governments endure, not merely based on domination and fear, but as regimes generating positive public support over time.
“What we find is that not only does the punishment of corrupt officials increase the perception among citizens that there is a capable and competent government, but it also increases the belief that government authorities have moral commitments citizens care about,” says Lily Tsai, an MIT political scientist and co-author of a newly published paper detailing the study’s findings.
In the case of China, these anticorruption actions tend to consist of public punishments of lower-level local officials who have violated the law. It is not clear that such measures actually reduce corruption overall, but people are still influenced by public gestures involving crackdowns on malfeasance.
“It signals that there is someone in authority who is willing to create order and stability for the public,” Tsai notes.
The paper, “What makes anticorruption popular? Individual-level evidence from China,” has been published in advance online form in the Journal of Politics. The authors are Tsai, who is the Ford Professor of Political Science and MIT’s chair of the faculty; and Minh D. Trinh and Shiyao Liu, who are PhD candidates in political science at MIT.
The study consists of a sophisticated public-opinion experiment conducted in China using “conjoint analysis,” a method that identifies how much relative influence different factors have on people’s views.
The researchers essentially conducted three iterations of a detailed public-opinion survey. Nearly 2,400 total participants, in both rural and urban settings, were presented with hypothetical profiles of pairs of government leaders and asked to evaluate their performances based on a range of supposed attributes and achievements — including their anticorruption activities. In these scenarios, the exact attributes and activities of the hypothetical leaders varied randomly, allowing the researchers to separate out the importance of anticorruption measures in the minds of citizens.
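The core of a conjoint design is that each attribute of a hypothetical profile is sampled independently at random, so the influence of any one attribute can later be estimated separately. A minimal sketch of that profile generation — the attribute names and levels here are illustrative assumptions, not the study’s actual instrument:

```python
import random

# Illustrative attributes for a hypothetical official; the real survey's
# attributes and wording differ.
ATTRIBUTES = {
    "economic_growth": ["high", "low"],
    "welfare_provision": ["generous", "minimal"],
    "anticorruption": ["punished corrupt subordinates", "took no action"],
}

def random_profile(rng):
    """Draw one hypothetical official by sampling each attribute independently."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

def make_pair(rng):
    """One survey task: a pair of randomized profiles for a respondent to compare."""
    return random_profile(rng), random_profile(rng)

rng = random.Random(0)  # seeded for reproducibility
left, right = make_pair(rng)
```

Because the levels vary independently across many such pairs, averaging respondents’ choices over the other attributes isolates the marginal effect of, say, anticorruption activity.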
Other things being equal, in these hypothetical scenarios, survey participants preferred officials making higher-profile anticorruption efforts, up to 25 percent more often than other officials. The survey’s respondents placed more weight on the economic stewardship provided by government officials, but rated anticorruption activities as being about equal in importance to welfare provision and administering elections fairly.
More significantly, Tsai says, the experiment finds that public interest in anticorruption gestures exists independently of anything else in a government official’s resume.
“Independent of how well officials do at economic development, or providing social welfare, or implementing elections, anticorruption punishment can still be a very useful tactic for authorities who are seeking to bolster their public support,” Tsai observes.
Indeed, Tsai adds, the results have a somewhat ominous implication along those lines: “These findings could indicate anticorruption punishment is a useful way of recession-proofing public support.”
Making punishment visible
The authors also introduced several modifications to the structure of the conjoint analysis to learn why people support visible anticorruption measures. Their study finds two distinct reasons behind this support. First, such measures signal that the officials taking action have the capacity to act decisively. Second, anticorruption actions signal that officials’ values are aligned with those of ordinary citizens — even when the same officials do not, say, administer local elections well enough to give voters a strong voice in selecting leaders.
“At least in the Chinese context, in both urban populations and rural populations in China, citizens see officials who punish other, lower-level officials for corruption as being more moral,” Tsai says. “They [think anticorruption officials] have the ‘right intentions.’”
Moreover, Tsai adds, anticorruption gestures seem effective even in the absence of evidence that corruption is actually reduced. At least in political terms, staging a high-profile anticorruption campaign matters more than actually quelling corruption.
“It’s in the interest of rulers to invest in anticorruption punishments even if that punishment does not decrease corruption,” Tsai says. “People have no data about how much corruption there is in government. What they can see more clearly are the incidents of punishment of corruption.”
In historical terms, Tsai adds, the results fit “a longstanding tradition in China where the rulers position themselves as the allies of ordinary people,” despite restricting individual liberties in many ways. That said, Tsai thinks the results describe a political dynamic that could be found in many nation-states, in many varieties: People will back leaders who support symbolic public punishments, conveying a message that the traditional social order will remain intact.
“People are often willing to sacrifice a lot for a sense of certainty,” Tsai says.