MIT Latest News
When publications such as U.S. News and World Report roll out their annual university rankings, typically with MIT among the top schools listed, some may wonder where the data they’re based on actually come from.
The source of that information is MIT Institutional Research, which collects and compiles data on many facets of the Institute, or, as Director Lydia Snover puts it, on MIT’s “people, money, and space.” The Institutional Research (IR) website is a wonderland of data that tells the story of MIT’s evolution over recent decades. There are surveys of faculty, graduate students, undergraduates — and even undergraduates’ parents. Users can also take a deep dive into the demographics of different subsets of the MIT community and peruse financial figures on research expenditures, tuition, and more.
Public universities have been providing this kind of information for decades to state and federal agencies that fund them. It’s unusual for a private university such as MIT to have such a robust IR operation and to share so much of its data publicly, but Snover has long been a leader in the field of IR at the national, and even international, level. She was recently awarded the John Stecklein Distinguished Member Award from the Association for Institutional Research, for advancing the field of institutional research through extraordinary scholarship, leadership, and service.
MIT News caught up with Snover to talk about IR at MIT, her philosophy about transparency, and why she’s a fan of the Institute’s data warehouse.
Q: What are the main types of data that your office collects, and what are they used for?
A: We bring together data from lots of different operational areas at MIT — including human resources, the registrar, admissions, and facilities, to name just a few — to simplify it in some ways and create metrics that can be used by departments, labs, and centers to help them meet their goals.
We complete all information requests for university rankings, guidebooks, and various consortiums. We also administer surveys for organizations like the Consortium for Financing of Higher Education as well as some of our other peer institutions. The majority of surveys we administer are just for the MIT community, or subsets of it. We administer over 100 surveys a year. We support the accreditation process and assist when asked with grant applications.
We provide reports for department heads in preparation for meetings with the Corporation’s visiting committees. We’ll put together a 10-year profile that includes department-level trends in staffing, retention, enrollment, sponsored research expenditures, how graduate students are being funded, things like that. We can compare those numbers within MIT and for a subset of metrics with other peer institutions.
People like to talk about making data-driven decisions, but we prefer the term “data-informed.” We collect data that help MIT’s senior officers make decisions about what’s best for the Institute.
A lot of the data we collect are available on our website, including our survey data. We have a philosophy that if we ask people to fill out a survey, they’re entitled to see the results!
Q: How has your mandate changed in the last 20 years, and what do you see in the office’s future?
A: Institutional Research was established in 1986 and initially we focused primarily on physical planning. Over the next 15 years we began administering surveys, responding on behalf of the Institute to external data requests, and providing briefing materials. In 2000 we moved to the Office of the Provost, and our portfolio has continued to evolve and grow, both in terms of the services we provide to MIT leadership and the greater MIT community, and our involvement in sharing data with other universities. The staff has evolved as well to include analysts, programmers, experts in survey design, data visualization, database design, statistics, and qualitative analysis. MIT IR has an extraordinarily gifted staff.
Nationally, large institutional research offices were needed mostly by public institutions to respond to state legislatures. Private universities and colleges have slowly built up their capacity, in large part to provide internal analysis. In 1988, MIT joined the Association of American Universities Data Exchange (AAUDE), a consortium which facilitates data sharing with other AAU universities on things like the composition of faculty at the department level. The number of private AAU universities participating in the AAU Data Exchange has gone from a handful in 1990 to all 27 since I and others began encouraging our colleagues to become active.
Before MIT became involved, members were mailing each other this information on paper — you’d have file cabinets of paper! — so MIT first volunteered to provide an FTP server to facilitate electronic exchange of data. Now the AAU Data Exchange has a data warehouse, which has made the whole system very efficient.
One area where we focus a lot of attention, through our surveys and other data collections, is what happens to our graduates: What percentage are going into industry? Which companies are hiring them? It used to be that all universities cared about was how many students go to graduate school, but MIT sends a lot of graduates to industry.
One new project is working with professors Susan Hockfield, Sangeeta Bhatia, and Nancy Hopkins and the Boston Biotech working group on some interesting issues in gender representation in biotechnology, looking at company leadership, issuance of patents, and other areas. The goal is to be able to compare MIT to national averages and deliberate on how to make positive changes in the ecosystem.
Q: MIT’s IR office is relatively big for a private university. Why is that?
A: The scope of work for MIT’s Institutional Research Office is unusual because we’re involved in many projects that are important to MIT but not typical for institutional research. For example, at MIT we work with MITx data and sponsored research trends.
We’re very lucky and unusual because at MIT we have centralized data systems but local decision making. The fact that we have only one registrar, for example, and centralized accounting systems makes it much easier for my office to pull data together and analyze it.
I can’t emphasize enough how important the MIT data warehouse is — to everyone at MIT, not just to us. If you’re an analyst in an office like ours, you’d have to learn query languages for all the different databases. You would also spend a large proportion of your time compiling and cleaning data. But IS&T set up this system so that data could feed into one central warehouse, and you don’t need special programming skills to pull information out of it. The MIT data warehouse has been the envy of most of our peers.
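The convenience Snover describes — one central store that any analyst can query without learning each source system's query language — can be sketched with a toy example. The table name, columns, and figures below are entirely hypothetical, and an in-memory SQLite database stands in for the warehouse purely for illustration:

```python
import sqlite3

# Stand-in for a central data warehouse: an in-memory SQLite database
# holding a hypothetical enrollment table. In practice, feeds from the
# registrar, HR, facilities, etc. would populate tables like this one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enrollment (dept TEXT, year INTEGER, students INTEGER)")
conn.executemany(
    "INSERT INTO enrollment VALUES (?, ?, ?)",
    [("Physics", 2018, 120), ("Physics", 2019, 131), ("Math", 2019, 98)],
)

# One SQL query against the single warehouse replaces separate queries
# against each operational system.
rows = conn.execute(
    "SELECT dept, SUM(students) FROM enrollment "
    "WHERE year = 2019 GROUP BY dept ORDER BY dept"
).fetchall()
print(rows)
```

The point of the sketch is the single uniform interface, not the specific schema: once every feed lands in one place, "pulling data together" becomes a query rather than a programming project.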
MIT is the best place in the world to do institutional research because we have faculty who aren’t afraid of the data, even if they show there’s room for improvement. There’s an engineering mentality that permeates MIT. If we find we’re different from our peers in a way that we need to fix, then we identify that and fix it. You never think you’re the best because there’s always something to improve on.
In the summer of 2018, a team led by MIT researchers reported in the journal Nature that they had successfully embedded electronic devices into fibers that could be used in fabrics or composite products like clothing, airplane wings, or even wound dressings. The advance could allow fabrics or composites to sense their environment, communicate, store and convert energy, and more.
Research breakthroughs typically take years to make it into final products — if they reach that point at all. This particular research, however, is following a dramatically different path.
By the time the unique fiber advance was unveiled last summer, members of Advanced Functional Fabrics of America (AFFOA), a not-for-profit near MIT, had already developed ways to increase the throughput and overall reliability of the process. And staff at Inman Mills in South Carolina had established a method to weave the advanced fibers using a conventional, industrial manufacturing-scale loom to create fabrics that can use light to both broadcast and receive information.
Today, less than a year after the technology was first introduced to the world, around a quarter of a million semiconducting devices have been embedded in fibers using the patented technology, and companies like New Balance, VF, Bose, and 3M are seeking ways to use the technology in their products.
“AFFOA is helping cutting-edge basic research to reach market-ready scale at unprecedented velocity,” says Yoel Fink, CEO of AFFOA and a professor of materials science and electrical engineering at MIT. “Chip-containing fibers, which were just recently a university research project, are now being produced at an annual rate of half a million meters. This scale allows AFFOA to engage dozens of companies and accelerate product and process development across multiple markets simultaneously.”
Fink says that AFFOA’s work is unleashing a “Moore’s Law for fibers,” wherein the basic functions of fibers will grow exponentially in the coming years, allowing companies to develop value-added fabric and composite products and services. “Chip-containing fibers present a real prospect for fabrics to be the next frontier in computation and AI,” he says.
Sowing the seeds of fabric innovation
In 2015, MIT President L. Rafael Reif called for the formation of public-private partnerships he named “innovation orchards,” to reduce the time it takes new ideas to make an impact on society. Specifically, he wanted to make tangible innovations as easy to deploy and test as digital ones.
Later that year, AFFOA was formed by MIT and other key partners to accept Reif’s challenge and take advantage of recent breakthroughs in fiber materials and textile-manufacturing processes.
“The gap between where research ends and product begins is the so-called valley of death,” Fink says. “President Reif introduced the concept of orchards of innovation as a way for us, as a university, to organize these collaboration centers for technology to help bridge basic research to the market entry point.”
In 2016, AFFOA was selected by the federal government to serve as the new Revolutionary Fibers and Textiles Manufacturing Innovation Institute, receiving more than $75 million in government funding and nearly $250 million in private investments to support U.S.-based, high-volume production of these new technologies.
Since then, speed has been paramount at AFFOA. As MIT and other research entities have advanced the field, AFFOA has helped facilitate pilot production of these sophisticated textiles and fabrics so companies can engage consumers with small batches of advanced fabric products, or prototypes, in a manner similar to how software companies roll out minimum viable products to quickly gather feedback from customers and consumers.
Fabrics at the speed of software
A key element in the success of software has been the ability to rapidly prototype and test products with the target customer. Tangible products, on the other hand, face a much more difficult path to consumers, and fabrics are no exception. The reason for this is the absence of efficient prototyping mechanisms at scale.
To allow fabric products to move faster to market, AFFOA has created a national prototyping network with dozens of domestic manufacturers and universities, allowing it to rapidly test advanced fabric products directly with customers.
The prototyping network is currently pursuing more than 30 projects, called MicroAwards, with industry and academia, designed to incorporate the latest advances in fibers and textiles into mass manufacturing processes. Industry and academic participants are required to operate within short timeframes, typically 90 days or less and divided into two-week sprints.
For instance, Teufelberger, a manufacturer of ropes located in Fall River, Massachusetts, is working with AFFOA on integrating advanced fibers into their braided ropes. The ropes can help climbers or divers communicate or store information on how the rope was used.
At the end of May, AFFOA will roll out at the Augmented Reality Expo a fabric augmented-reality experience that will allow conference attendees to connect with each other using AFFOA’s fabric LOOks system.
The fabric of entrepreneurship and education
AFFOA has also partnered with schools such as the Fashion Institute of Technology in New York and the Greater Lawrence Technical School, where students are learning how to design and make advanced chip-containing fibers, as well as other skills related to manufacturing advanced functional fabrics and the products that will emerge from them.
Additionally, over 30 entrepreneurs have been working on establishing startups around advanced fabrics as part of the advanced fabric entrepreneurship program managed by AFFOA in collaboration with the Venture Mentoring Service at MIT.
AFFOA is currently evaluating the prospects of raising an investment fund dedicated to funding startups in the advanced fabric sector.
For Fink, AFFOA’s work is about turning fabric, an ancient yet largely unchanged material, into a new platform for innovation.
“Fabrics occupy a very significant real estate, the surface of our bodies, and yet we’re not doing much with that real estate — it’s underdeveloped,” Fink says. “AFFOA is setting the stage for a fabric revolution by allowing these ancient forms to become high tech and deliver value-add services in the years ahead.”
From the perspective of a chemical engineer, particulate gels are the stuff of modern life. These materials, in which small pieces of one kind of substance are suspended or distributed within another, can be found in such construction products as concrete, inks, and paints; foods like cheese, yogurt, and ice cream; and in a range of cosmetic- and health-related staples including shampoo, toothpaste, and vaccines. In sum, says James W. Swan, the Texaco-Mangelsdorf Career Development Professor in Chemical Engineering, "a massive variety of real-world, everyday things bear particles."
Many of these ubiquitous gels, creams, emulsions, and compounds evolved through "trial-and-error experimentation," says Swan. Engineering such materials often proves to be a prolonged and sometimes inefficient hit-or-miss process.
But now Swan and collaborators from other universities have devised a framework that will help guide the design of new materials involving such particulate compounds. An account of their research, which began in 2015, appears in the May 20 issue of Nature Communications. The experimental studies were conducted at the University of Delaware and the University of Michigan. Swan and MIT doctoral student Zsigmond Varga (now a process development engineer at ExxonMobil) were responsible for the computational side of the work.
Combining laboratory experiments and computational simulations, the team has analyzed the formation of networks of particles that determine the microstructure of a wide range of materials. This enables the researchers to predict the macroscopic mechanical properties conferred by these networks. Their new approach will make it possible to "seek out new materials or engineer systems better optimized for tasks in terms of properties or costs, for a lot of different technologies," says Swan.
The key is elasticity
Predicting the elasticity of materials — how soft or hard they will be — has proved a longstanding challenge in the area of chemical engineering that deals with particle networks. This property, called the elastic modulus, is central to designing new things. But, says Swan, "We have had no equations to make predictions, and these equations would be really helpful in creating new formulations."
To develop such useful mathematical models, though, the science required a platform of informative experimental data that could fill in "fundamental gaps in understanding how networks of particles are built, and where mechanical properties come from," explains Swan. So his colleagues set out to conduct these essential studies.
In order to investigate the formation and properties of particle networks, Swan's collaborators devised a set of nifty methods. They created particles that were effectively translucent except when illuminated by a special kind of fluorescent light. This permitted the researchers to image particle networks in real time under a microscope. They also fashioned the particles so they could be manipulated by laser tweezers — a device that can exert forces on small particles. With these novel tools, the researchers could directly measure the force each particle exerted on another. "My colleagues could observe the particles' motion and how they stuck together, and gained insight into the nature of the particle network's elasticity," says Swan.
These very precise experimental techniques, says Swan, "made it possible for our lab to build simulations that quantitatively reproduced the experimental results." But Swan also faced challenges: The distribution of particles in a network has often seemed disordered to scientists — as if a distracted builder had laid bricks haphazardly in mortar. "It's as if there are no discernible patterns," says Swan.
His lab's unique solution was to apply methods from graph theory, a field of mathematics often used to understand computer or social networks, to deconstruct and model the discrete elements and connections among particles in chemical networks.
By employing "the same tools useful for finding cliques in social networks — tightly connected groups loosely connected to each other — we are able to discriminate the bricks from the mortar in these particle networks, see how the particles are positioned relative to each other, and infer the strength or weakness — the elasticity — of the overall network."
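The clique-finding idea Swan describes can be illustrated with a toy particle-contact graph. The graph below and the clustering-coefficient threshold are illustrative inventions, not data or methods from the study: tightly bonded triangles play the role of "bricks," and the low-connectivity bridge between them plays the role of "mortar."

```python
# Toy particle-contact graph: nodes are particles, edges are contacts.
# Two triangles ("bricks") joined by a two-edge bridge ("mortar").
edges = [
    (0, 1), (1, 2), (0, 2),   # brick A
    (3, 4), (4, 5), (3, 5),   # brick B
    (2, 6), (6, 3),           # mortar: bridge particle 6
]

# Build an adjacency map.
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def clustering(node):
    """Fraction of a node's neighbor pairs that are themselves in contact."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2 * links / (k * (k - 1))

# Particles inside tightly bonded clusters have nonzero clustering;
# the bridge particle has none.
bricks = sorted(n for n in adj if clustering(n) > 0)
mortar = sorted(n for n in adj if clustering(n) == 0)
print(bricks, mortar)
```

On real networks of thousands of particles, community-detection tools from graph theory generalize this local measure, separating dense clusters from the sparse strands whose stretching and bending set the network's elasticity.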
From networks to new materials
Two scientists not involved in the studies behind the journal article find the work extremely promising.
"This task of discerning connections within a huge collection of particles has historically been extremely puzzling and challenging," says Thibaut Divoux, a research scientist with a joint appointment at MIT and France's National Center for Scientific Research. "The use of graph theory to identify clusters, verify experimental results, and to predict properties is really elegant, as well as groundbreaking," he says.
Randy H. Ewoldt SM '06, PhD '09, an associate professor of mechanical science and engineering at University of Illinois at Urbana-Champaign, found "the agreement between simulation and experiment impressive." He believes the work of Swan and his collaborators "represents excellent progress toward the goal of predicting and engineering properties of these materials."
There might be myriad applications of this research to improve existing particle-based products and to formulate new ones, suggests Swan. Take the abundant substance of concrete. Since "we now know how changing different chemical or physical factors in these networks will affect properties," he says, concrete's particle network — composed of aggregate rock and cement — could be engineered for enhanced strength while using less material.
There is potential as well for advancing grid-scale energy storage, where particle networks are deployed as electrodes for flow batteries capturing energy from wind or solar power; and for developing new pharmaceuticals such as protein-based solutions used in drug delivery.
And some scientists hungrily envision applications in food manufacturing: "Imagine designing yogurt, cheese, or other dairy colloidal products to determine the mouthfeel, so that the product crumbles or breaks just the way you want," says Divoux. "This work gives us an experimental key we can use to control the microstructure properties of a material to tweak its macrostructure."
Funding for the experimental portion of this research came from the International Fine Particle Research Institute, and funding for the computational work was made possible by the American Chemical Society Petroleum Research Fund, and by a National Science Foundation Young Investigator Award.
As the size, complexity, and interconnection of societal systems increase, these systems generate huge amounts of data that can lead to new insights. These data create an opportunity for policymakers aiming to address major societal challenges, provided they have the tools to understand the data and use them for better decision-making.
At a unique MIT event convened by MIT’s Technology and Policy Program (TPP), a part of the Institute for Data, Systems, and Society (IDSS), interdisciplinary teams analyzed data sets and created policy proposals for real challenges submitted by academic groups and local government. The student-run MIT Policy Hackathon gathered data analysts, engineers, scientists, domain experts, and policy specialists to look for creative, data-driven solutions to major societal issues.
“One of the goals of the hackathon is to show others the power of using technology and policy together to craft solutions to important societal problems,” says Becca Browder, a Policy Hackathon organizer and student in TPP. “I think the event achieved that goal.”
The hackathon teams worked over 48 hours on one of five challenges in the areas of climate, health, artificial intelligence and ethics, urban planning, and the future of work. The hackathon ended in a proposal pitch session to a panel of judges from academia, government, and industry.
In the climate challenge, sponsored by the City of Boston, teams examined precipitation data to help the city prepare for increased flooding due to climate change.
“The city is taking climate change very seriously,” says Charlie Jewell, director of planning and sustainability for the Boston Water and Sewer Commission. After mentoring and judging the climate challenge, Jewell said there was a “good give-and-take” to be had from partnering with local universities. “The organizers and participants all did such an unbelievable job. I got some great ideas from participants for looking at our rainfall data in different ways. They also showed what kind of data they needed and how we could get it.”
Hackathon participant Minghao Qiu, a student at IDSS in the Social and Engineering Systems doctoral program, also found the opportunity to work directly with stakeholders useful. “The interaction with the challenge sponsor helped me think about how to better communicate my research findings with policymakers in the future,” says Qiu, whose team GAMMDRYL also included TPP alumnus Arthur Yip SM ’14. GAMMDRYL won the climate challenge with a proposal recommending the city team up with a citizen science initiative that crowdsources rainfall data.
“I learned that it is often useful to help decision-makers to understand their data better,” Qiu says.
The overall winner of the hackathon was a team called Dream ER, which worked on the health challenge. This challenge, sponsored by Harvard School of Public Health graduate student Ahmed Mahmoud Abdelfattah, asked for ways to optimize emergency rooms by studying patient traffic and outcome data.
“By using creative visualization techniques, they simulated how their policy suggestions can result in an overall improvement in service efficiency,” Abdelfattah says of the winning team’s proposal. “Their proposal was also quite generalizable, meaning that those same methods they used to examine the data and simulate changes can be applied to other hospitals and other care settings.”
For the AI and ethics challenge, sponsored by the Berkman Klein Center for Internet and Society at Harvard University, teams worked to develop a resource, such as a visualization tool, to help nontechnical policy advocates understand different definitions of "algorithmic fairness" — especially in the context of criminal justice risk-assessment tools. Participants had access to data shared by journalists who evaluated COMPAS, a widely used recidivism risk scoring tool.
The urban planning challenge, sponsored by the City of Boston’s Department of Innovation and Technology, tasked participants with assessing the impact of AirBnB on neighborhood economies and Boston’s affordable housing crisis, using the city’s short-term rental data. The future of work challenge, posed by the MIT Initiative on the Digital Economy (IDE), asked for a broad exploration of the potential for machine learning to automate tasks. Using a data set of work activities put together by researchers at MIT and Carnegie Mellon University, this challenge asked for policy proposals that help predict and prepare for the impact of machine learning automation on industries and workers.
This was the third MIT Policy Hackathon: an inaugural hackathon was held in spring 2018, and another was organized for Boston Hubweek in fall 2018. Students hope to make it a fixture of the program. “IDSS and TPP work on how policy and society interact with science and technology, and how we can use data to enhance policy,” Browder says. “These are also main goals of the hackathon, so there is strong strategic alignment between the event and the host organizations.”
TPP director Noelle Selin agrees. “TPP and IDSS are educating scientists, engineers, and leaders who can use the tools of data science as well as speak the language of policy,” says Selin, a professor in IDSS and Earth, Atmospheric, and Planetary Sciences. “We need this type of interdisciplinary thinking to tackle the most pressing challenges facing society.”
Most of us know optical lenses as curved, transparent pieces of plastic or glass, designed to focus light for microscopes, spectacles, cameras, and more. For the most part, a lens’ curved shape has not changed much since it was invented many centuries ago.
In the last decade, however, engineers have created flat, ultrathin materials called “metasurfaces” that can perform tricks of light far beyond what traditional curved lenses can do. Engineers etch individual features, hundreds of times smaller than the width of a single human hair, onto these metasurfaces to create patterns that enable the surface as a whole to scatter light very precisely. But the challenge is to know exactly what pattern is needed to produce a desired optical effect.
That’s where MIT mathematicians have come up with a solution. In a study published this week in Optics Express, a team reports a new computational technique that quickly determines the ideal makeup and arrangement of millions of individual, microscopic features on a metasurface, to generate a flat lens that manipulates light in a specified way.
Previous work attacked the problem by limiting the possible patterns to combinations of predetermined shapes, such as circular holes with different radii, but this approach only explores a tiny fraction of the patterns that can potentially be made.
The new technique is the first to efficiently design completely arbitrary patterns for large-scale optical metasurfaces, measuring about 1 square centimeter — a relatively vast area, considering each individual feature is no more than 20 nanometers wide. Steven Johnson, professor of mathematics at MIT, says the computational technique can quickly map out patterns for a range of desired optical effects.
“Say you want a lens that works well for several different colors, or you want to take light and instead of focusing it to a spot, make a beam or some sort of hologram or optical trap,” Johnson says. “You can tell us what you want to do, and this technique can come up with the pattern that you should make.”
Johnson’s co-authors on the paper are lead author Zin Lin, Raphaël Pestourie, and Victor Liu.
A single metasurface is typically divided into tiny, nanometer-sized pixels. Each pixel can either be etched or left untouched. Those that are etched can be put together to form any number of different patterns.
To date, researchers have developed computer programs to search out any possible pixel pattern for small optical devices measuring tens of micrometers across. Such tiny, precise structures can be used to, for instance, trap and direct light in an ultrasmall laser. The programs that determine the exact patterns of these small devices do so by solving Maxwell’s equations — a set of fundamental equations that describe the scattering of light — based on every single pixel in a device, then tuning the pattern, pixel by pixel, until the structure produces the desired optical effect.
But Johnson says this pixel-by-pixel simulation task becomes nearly impossible for large-scale surfaces measuring millimeters or centimeters across. A computer would not only have to work with a much larger surface area, with orders of magnitude more pixels, but also would have to run multiple simulations of many possible pixel arrangements to eventually arrive at an optimal pattern.
“You have to simulate on a scale big enough to capture the whole structure, but small enough to capture fine details,” Johnson says. “The combination is really a huge computational problem if you attack it directly. If you threw the biggest supercomputer on Earth at it, and you had a lot of time, you could maybe simulate one of these patterns. But it would be a tour de force.”
From a randomly patterned metasurface, the new technique quickly evolves an ideal pattern to produce a lens with desired optical effects. Credit: Zin Lin
An uphill search
Johnson’s team has now come up with a shortcut that efficiently simulates the desired pattern of pixels for large-scale metasurfaces. Instead of having to solve Maxwell’s equations for every single nanometer-sized pixel in a square centimeter of material, the researchers solved these equations for pixel “patches.”
The computer simulation they developed starts with a square centimeter of randomly etched, nanometer-sized pixels. They divided the surface into groups of pixels, or patches, and used Maxwell’s equations to predict how each patch scatters light. They then found a way to approximately “stitch” the patch solutions together, to determine how light scatters across the entire, randomly etched surface.
From this starting pattern, the researchers then adapted a mathematical technique known as topology optimization, to essentially tweak the pattern of each patch over many iterations, until the final, overall surface, or topology, scatters light in a preferred way.
Johnson likens the approach to attempting to find your way up a hill, blindfolded. To produce a desired optical effect, each patch has an optimal etched pattern to be attained, which can be thought of metaphorically as a peak. Finding this peak, for every patch, is the topology optimization problem.
“For each simulation, we’re finding which way to tweak each pixel,” Johnson says. “You then have a new structure which you can resimulate, and you keep doing this process, each time going uphill until you reach a peak, or optimized pattern.”
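The tweak-resimulate-repeat loop Johnson describes can be sketched as a minimal hill climb. The target pattern and objective below are toy stand-ins; the actual method uses topology optimization with gradient information from the patched Maxwell solves, not single-pixel flips.

```python
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical ideal etch pattern

def objective(p):
    # Stand-in figure of merit: how closely the pattern matches TARGET.
    # The real objective scores the simulated scattered field against
    # the desired optical effect.
    return sum(a == b for a, b in zip(p, TARGET))

def hill_climb(pattern):
    improved = True
    while improved:                  # keep going uphill...
        improved = False
        for i in range(len(pattern)):
            trial = pattern[:]
            trial[i] ^= 1            # tweak one pixel
            if objective(trial) > objective(pattern):
                pattern, improved = trial, True
    return pattern                   # ...until no tweak helps: a peak

result = hill_climb([0] * len(TARGET))
```

Each pass corresponds to one "resimulate" step; the loop terminates exactly when no single tweak improves the objective, i.e., at a (local) peak.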
The team’s technique is able to identify an optimal pattern in just a few hours, compared with traditional pixel-by-pixel approaches which, if applied directly to large metasurfaces, would be virtually intractable.
Using their technique, the researchers quickly came up with optical patterns for several “metadevices,” or lenses with varying optical properties, including a solar concentrator that takes incoming light from any direction and focuses it to a single point, and an achromatic lens, which scatters light of different wavelengths, or colors, to the same point, with equal focus.
“If you have a lens in a camera, if it’s focused on you, it should be focused for all colors simultaneously,” Johnson says. “The red shouldn’t be in focus while the blue is out of focus. So you have to come up with a pattern that scatters all the colors in the same way so they go into the same spot. And our technique is able to come up with a crazy pattern that does that.”
Going forward, the researchers are working with engineers, who can fabricate the intricate patterns that their technique maps out, to produce large metasurfaces, potentially for more precise cellphone lenses and other optical applications.
“These surfaces could be produced as sensors for cars that drive themselves, or augmented reality, where you need good optics,” Pestourie says. “This technique allows you to tackle much more challenging optical designs.”
This research was funded, in part, by the U.S. Army Research Office through the Institute for Soldier Nanotechnologies at MIT.
An MIT student team took second place for its design of a multilevel greenhouse to be used on Mars in NASA’s 2019 Breakthrough, Innovative and Game-changing (BIG) Idea Challenge last month.
Each year, NASA holds the BIG Idea competition in its search for innovative and futuristic ideas. This year’s challenge invited universities across the United States to submit designs for a sustainable, cost-effective, and efficient method of supplying food to astronauts during future crewed explorations of Mars. Dartmouth College was awarded first place in this year’s closely contested challenge.
“This was definitely a full-team success,” says team leader Eric Hinterman, a graduate student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). The team had contributions from 10 undergraduates and graduate students from across MIT departments. Support and assistance were provided by four architects and designers in Italy. This project was completely voluntary; all 14 contributors share a similar passion for space exploration and enjoyed working on the challenge in their spare time.
The MIT team dubbed its design “BEAVER” (Biosphere Engineered Architecture for Viable Extraterrestrial Residence). “We designed our greenhouse to provide 100 percent of the food requirements for four active astronauts every day for two years,” explains Hinterman.
The ecologists and agriculture specialists on the MIT team identified eight types of crops to provide the calories, protein, carbohydrates, and oils and fats that astronauts would need; these included potatoes, rice, wheat, oats, and peanuts. The flexible menu suggested substitutes, depending on astronauts’ specific dietary requirements.
“Most space systems are metallic and very robotic,” Hinterman says. “It was fun working on something involving plants.”
Parameters provided by NASA — a power budget, dimensions necessary for transporting by rocket, the capacity to provide adequate sustenance — drove the shape and the overall design of the greenhouse.
Last October, the team held an initial brainstorming session and pitched project ideas. The iterative process continued until they reached their final design: a cylindrical growing space 11.2 meters in diameter and 13.4 meters tall after deployment.
An innovative design
The greenhouse would be packaged inside a rocket bound for Mars and, after landing, a waiting robot would move it to its site. Programmed with folding mechanisms, it would then expand horizontally and vertically and begin forming an ice shield around its exterior to protect plants and humans from the intense radiation on the Martian surface.
Two years later, when the orbits of Earth and Mars were again optimally aligned for launching and landing, a crew would arrive on Mars, where they would complete the greenhouse setup and begin growing crops. “About every two years, the crew would leave and a new crew of four would arrive and continue to use the greenhouse,” explains Hinterman.
To maximize space, BEAVER employs a large spiral that moves around a central core within the cylinder. Seedlings are planted at the top and flow down the spiral as they grow. By the time they reach the bottom, the plants are ready for harvesting, and the crew enters at the ground floor to reap the potatoes and peanuts and grains. The planting trays are then moved to the top of the spiral, and the process begins again.
“A lot of engineering went into the spiral,” says Hinterman. “Most of it is done without any moving parts or mechanical systems, which makes it ideal for space applications. You don’t want a lot of moving parts or things that can break.”
The human factor
“One of the big issues with sending humans into space is that they will be confined to seeing the same people every day for a couple of years,” Hinterman explains. “They’ll be living in an enclosed environment with very little personal space.”
The greenhouse provides a pleasant area to ensure astronauts’ psychological well-being. On the top floor, just above the spiral, a windowed “mental relaxation area” overlooks the greenery. The ice shield admits natural light, and the crew can lounge on couches and enjoy the view of the Mars landscape. And rather than running pipes from the water tank at the top level down to the crops, Hinterman and his team designed a cascading waterfall at the area’s periphery, further adding to the ambiance.
Sophomore Sheila Baber, an Earth, atmospheric, and planetary sciences (EAPS) major and the team’s ecology lead, was eager to take part in the project. “My grandmother used to farm in the mountains in Korea, and I remember going there and picking the crops,” she says. “Coming to MIT, I felt like I was distanced from my roots. I am interested in life sciences and physics and all things space, and this gave me the opportunity to combine all those.”
Her work on BEAVER earned Baber one of five NASA internships at Langley Research Center in Hampton, Virginia, this summer. She expects to continue exploring the greenhouse project and its applications on Earth, such as in urban settings where space for growing food is constrained.
“Some of the agricultural decisions that we made about hydroponics and aquaponics could potentially be used in environments on Earth to raise food,” she says.
“The MIT team was great to work with,” says Hinterman. “They were very enthusiastic and hardworking, and we came up with a great design as a result.”
In addition to Baber and Hinterman, team members included Siranush Babakhanova (Physics), Joe Kusters (AeroAstro), Hans Nowak (Leaders for Global Operations), Tajana Schneiderman (EAPS), Sam Seaman (Architecture), Tommy Smith (System Design and Management), Natasha Stamler (Mechanical Engineering and Urban Studies and Planning), and Zhuchang Zhan (EAPS). Assistance was provided by Italian designers and architects Jana Lukic, Fabio Maffia, Aldo Moccia, and Samuele Sciarretta. The team’s advisors were Jeff Hoffman, Sara Seager, Matt Silver, Vladimir Aerapetian, Valentina Sumini, and George Lordos.
The BIG Idea Challenge is sponsored by NASA’s Space Technology Mission Directorate’s Game Changing Development program and managed by the National Institute of Aerospace.
MIT senior Amnahir Peña-Alcántara, from the Bronx, New York, has been selected as one of this year’s 69 Knight-Hennessy Scholars. After graduating in June with a bachelor of science in materials science and engineering, Peña-Alcántara will begin PhD studies this fall at Stanford University School of Engineering. She aspires to create affordable, wearable-technology clothing that offers sustainable solutions to environmental and public health issues.
The Knight-Hennessy Scholars program, now in its second year, funds the full cost of graduate education at Stanford University and aims to develop future interdisciplinary global leaders committed to tackling the world’s most complex challenges. For its 2019 cohort, the program received over 4,400 applications from students around the world. Scholars are selected based on their academic excellence, independence of thought, purposeful leadership, and civic mindset.
Dance was an integral part of Peña-Alcántara’s Dominican culture while she was growing up, but childhood asthma limited her participation. Instead, she became fascinated by the dancers’ costumes and how they contributed to expression of movement. During her first year at MIT, Peña-Alcántara visited Ministry of Supply, a startup founded by MIT graduates, where she discovered that fashion offers many engineering as well as decorative possibilities. She began to investigate how biofabrics could enhance public health by filtering carbon dioxide and improving air quality.
Peña-Alcántara has conducted research on novel fibers and wearable technology with Professor Yoel Fink’s Fibers@mit lab; Professor Neri Oxman’s group at the MIT Media Lab; Institute Professor Robert Langer’s MIT lab; Professor John Rogers’ Center for Bio-Integrated Electronics at Northwestern University; and Professor Hazel Assender’s polymers lab at Oxford University during a junior year departmental exchange. As an intern, she has integrated temperature sensors into fiber with the Advanced Functional Fabrics of America, analyzed fabric design processes with Himatsingka Seide in India, and repaired costumes for the Boston Ballet. She has also worked as a research assistant at labs in the United States, Bahamas, and China, and as a surgeon assistant in the Dominican Republic. Prior to matriculating at MIT, Peña-Alcántara spent a year in China studying Mandarin at Tsinghua University and working on research at Peking University.
Peña-Alcántara, a National Hispanic Recognition Program Scholar, has been the women’s saber squad leader with the MIT varsity fencing team. She has tutored and mentored middle and high school students with the National Society of Black Engineers and she tutors MIT students through the Tau Beta Pi honor society. She is also a member of the Society of Hispanic Professional Engineers.
MIT students interested in applying for the Knight-Hennessy Scholarship should contact Kim Benard, assistant dean of distinguished fellowships, in the Distinguished Fellowships Office at Career Advising and Professional Development. The deadline to apply for the program’s 2020 cohort is Oct. 9.
Builders of genetic circuits face the same quandary as builders of digital circuits: testing their designs. Yet unlike bioengineers, electronics engineers have a simple and universal testing tool — the multimeter — which they can touch to a circuit to measure its performance. “There’s nothing remotely like this in bio,” says Peter Carr, a synthetic biologist in MIT Lincoln Laboratory’s Bioengineering Systems and Technologies Group.
That was the case until recently. Carr and researchers in his group have developed a system that they liken to a “biomultimeter.” The system, called PERSIA, uses fluorescent labeling to illuminate different parts of a genetic circuit and allows researchers to measure biological functions — including transcription, translation, and other enzyme activities — in vitro in real time. A paper describing this work was published online in the journal ACS Synthetic Biology.
“We now have a way to quickly test new genetic code designs in specific genes. We can accelerate how we ask and answer questions with DNA,” says coauthor David Walsh.
For synthetic biologists to engineer cells that work how they want them to — say, to be immune to a virus — they must be able to tell the cell which genes to express. To do this, they use synthetic DNA to program genetic circuits that control the cell’s behavior. By measuring transcription (the process by which a gene’s DNA is copied into RNA) and translation (the process by which that RNA is read to produce proteins) bioengineers can know how well a circuit is working.
Proteins such as green fluorescent protein (GFP) are already a favorite testing tool for gene expression. With such methods, a gene that carries instructions to produce fluorescent pigments can be fused to a gene in a DNA sequence that will produce a protein of interest. The GFP glow lets biologists know that this protein is being produced.
While GFP has been game-changing for bioresearch, it has its limitations. For one, fluorescent proteins are large, comprising over 200 amino acids that eat up the finite resources in an in-vitro test and may interfere with the natural function of the protein of interest. They also take time to emit their signal; thus, GFP is a reporter of the past, and when things don’t go as planned, it’s unclear what went wrong and when.
PERSIA, on the other hand, uses small fluorescent tags appended to the researcher’s protein of interest. Because PERSIA’s tags are so small, they can be attached to proteins anywhere within a genetic circuit, allowing for the study of different parts of it. They also emit their signal several minutes faster than GFP, making measurements more representative of live rates of transcription and translation.
“We are replacing GFPs with something smaller, that is easier to move around and reconfigure, and measuring fluorescent signals as these processes occur,” Carr says.
PERSIA stands for PURExpress-ReAsH-Spinach In-Vitro Analysis and uses the off-the-shelf products listed in its name. PURExpress is the cell-free “soup” that houses the experiments; this type of in-vitro environment allows for the study of specific biological processes apart from a full living organism.
Spinach is the name of the RNA tag that is fused to a gene. It will fluoresce green when the gene’s DNA is transcribed into RNA and as it reacts to a chemical reagent in the PERSIA mixture. A second, peptide tag is also attached to the gene. This tag will fluoresce red as it is translated at the end of a protein and upon reacting with another chemical reagent in the PERSIA mixture called ReAsH.
During an experiment, all of these ingredients — the tag-encoding DNA sequences and the reagents — are mixed together, and fluorescence is monitored. The amounts and activities of RNA and proteins, the products of transcription and translation, are known essentially immediately.
Genetic and clinical designs
Because of its swift fluorescence, PERSIA can accelerate the evaluation of genetic code designs and act as a screening tool for obvious failures. “You can project what is going to happen,” says Scott Wick, first author of the paper. “Weeks if not months of time are saved downstream for prototyping, testing different designs out and doing many more tests before you get into exploring these engineered codes in vivo.”
The developers tested PERSIA by using it to screen their own redesigns of the genetic code of E. coli. Changes in the ReAsH fluorescent signal showed that one of their redesigns caused translation rates to plummet. They then studied which genetic changes specifically might be responsible for this defect and constructed synthetic DNA to evaluate their best guesses. With only a few DNA sequence modifications and tests with PERSIA, their revised gene design was as productive as the original E. coli gene.
“PERSIA let us quickly troubleshoot how protein production had gone way off and revealed how we could bring it back to normal levels,” Carr says.
Beyond its use for reporting on transcription and translation, PERSIA is also valuable for studying how a protein is working. For example, the team extended PERSIA to monitor the activity of the enzyme HIV-1 protease, which matures proteins in the HIV virus and is one of the main targets for anti-HIV drugs. They reconstructed clinical variants of the HIV-1 protease gene known to demonstrate resistance to these drugs. They then added a product to PERSIA that fluoresces with HIV-1 protease activity, using the results to monitor the effects of different anti-HIV drugs on each variant. The variants’ resistance patterns as determined from the PERSIA tests were consistent with their known resistance patterns documented in the Stanford HIV Drug Resistance Database.
“When you think about all of the genetic diversity that a viral infection has, this constellation of variants, you want a quick way of testing that diversity for choosing the best drug therapies,” Carr says. “We can do these resistance tests with PERSIA, many of them, and quicker than was previously possible, and hope to use it for giving patients personalized drug regimens. This way the patient isn’t the experiment.”
Similarly, the PERSIA tool could be applied to health security, allowing researchers to respond to new and potentially devastating diseases by rapidly screening the effectiveness of available medicines.
The research team has been implementing PERSIA at the scale of microliters, in standard multiwell plates. But they also demonstrated the potential for miniaturizing the approach in a microfluidic device, at the scale of nanoliters. They designed a microfluidic device with 96 reactors and rows of valves that automatically control the flow of fluids through the device. As the reagents used in PERSIA can be costly, microfluidics provide a way of performing hundreds of tests simultaneously while using up only trace amounts of these chemicals. Building microfluidic devices to include even larger numbers of parallel reactors is one of the team’s next goals.
Another hope is that other scientists will find PERSIA as useful as it has been for the laboratory’s work. The system came together over five years, during which the team noticed that they could use this same, simple method for answering many different questions about their DNA-based designs.
“PERSIA sets a foundation for quickening the process to scientific discovery,” Wick says. “That’s what I’m most excited about.”
SpringFEST, an annual spring concert coordinated and hosted by MIT’s Undergraduate Association (UA) Events Committee, is one of the largest student-run events on campus. This year, the show opened with MIT’s very own artist Solstice Fayemz, followed by headliner 6LACK, a three-time Grammy Award-nominated singer and songwriter.
Alice Zhang, a junior and co-chair of the UA Events Committee, says the work of planning MIT’s annual SpringFEST concert begins in December. The committee reviews student survey data from the previous year’s event and works with their advisor from the Student Activities Office (SAO) and an agent to select an artist.
“The best part about being co-chairs of UA events is the sense of satisfaction you feel when the event is over. You are really able to see all the moving parts that come together to make it possible and appreciate its success,” says Zhang.
Nearly 800 tickets were sold for this year’s event, and despite the inclement weather, the concert brought many students, smiles, laughter, and groovy dance moves to Johnson Ice Rink.
Zhang mentions that being a part of the UA Events Committee provides a great opportunity to be involved in planning events that are an important aspect of student life.
“Having a spring concert is a classic undergraduate student event, and it is great to see so many students from different parts of campus coming out to enjoy what we have planned!” says Zhang.
MIT and the U.S. Air Force have signed an agreement to launch a new program designed to make fundamental advances in artificial intelligence that could improve Air Force operations while also addressing broader societal needs.
The effort, known as the MIT-Air Force AI Accelerator, will leverage the expertise and resources of MIT and the Air Force to conduct fundamental research directed at enabling rapid prototyping, scaling, and application of AI algorithms and systems. The Air Force plans to invest approximately $15 million per year as it builds upon its five-decade relationship with MIT.
The collaboration is expected to support at least 10 MIT research projects addressing challenges that are important to both the Air Force and society more broadly, such as disaster response and medical readiness.
“This collaboration is very much in line with MIT’s core value of service to the nation,” says Maria Zuber, MIT’s vice president for research and the E.A. Griswold Professor of Geophysics. “MIT researchers who choose to participate will bring state-of-the-art expertise in AI to advance Air Force mission areas and help train Air Force personnel in applications of AI.”
Under the agreement, MIT will form interdisciplinary teams of researchers, faculty, and students whose work focuses on topics in artificial intelligence, control theory, formal methods, machine learning, robotics, and perception, among other fields. Teams will also include leaders in technology policy, history, and ethics from a range of departments, labs, and centers across the Institute. Members of the Air Force will join and lend expertise to each team.
“MIT is the leading institution for AI research, education, and application, making this a huge opportunity for the Air Force as we deepen and expand our scientific and technical enterprise,” Secretary of the Air Force Heather Wilson says. “Drawing from one of the best of American research universities is vital.”
The AI Accelerator can include faculty, staff, and students in all five MIT schools, and will be a component of the new MIT Stephen A. Schwarzman College of Computing, opening this fall. The college will take a strongly interdisciplinary approach to computing, and focus on the societal implications of computing and AI. The MIT-Air Force program will be housed in MIT’s Beaver Works facility, an innovation center located in the Technology Square block of Kendall Square. MIT Lincoln Laboratory, a U.S. Department of Defense federally funded research and development center, will make available its specialized facilities and resources to support Air Force mission requirements.
“Our objective is to advance the underlying science behind AI and facilitate societal applications, including helping create solutions in fields like disaster relief and medical preparedness that are of interest to the Air Force,” says Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “We plan to assemble interdisciplinary teams that will collaborate across disparate fields of AI to create new algorithms and solutions.”
The AI Accelerator research program will aim to develop new algorithms and systems to assist complex decision-making that might help the Air Force, for example, better focus its maintenance efforts — an expensive and critical part of its aircraft operations. This fundamental research also intends to develop AI to assist humans in aspects of planning, control, and other complex tasks. Finally, the work aims to enable rapid deployment of advanced algorithms and capabilities developed at MIT to foster AI innovation across the country.
In addition to disaster relief and medical readiness, other possible research areas may include data management, maintenance and logistics, vehicle safety, and cyber resiliency.
“The AI Accelerator provides us with an opportunity to develop technologies that will be vectors for positive change in the world,” Rus says. “This new project will integrate societal implications into research from the outset.”
“MIT continues to pursue research that addresses current problems, while training researchers to think through the implications for tomorrow as research is translated to new technologies and new problems,” adds Krystyn Van Vliet, associate provost and professor of materials science and engineering and of biological engineering. “The MIT-Air Force AI Accelerator allows MIT to demonstrate that concept when AI provides one of the tools for human decisions.”
Anna Sappington’s first moments of fame came when she was a young girl, living in a home so full of pets she calls it a zoo. She grew up on the Chesapeake Bay, surrounded by a lush environment teeming with wildlife, and her father was an environmental scientist. One day, when she found a frog in a skip laurel bush, she named him Skippy and built him a habitat. Later on, she and Skippy appeared on the Animal Planet TV special “What’s to Love About Weird Pets?”
Now a senior majoring in computer science and molecular biology, Sappington has been chosen for another prestigious honor: She’s one of five MIT students selected this year to be Marshall Scholars. She chose to study computer science because she wanted to have a role in pulling apart and understanding data, and she chose biology because of her lifelong fascination with nature, cells, genetic inheritance — and, of course, Skippy.
“My interests have grown and expanded in different ways, but they’re still kind of rooted in this natural dual passion that I have for both of these fields,” she says.
An eye for genomic research
When Sappington came to MIT, it was right after her first summer internship at the National Institutes of Health, where she examined genes that could be related to increased risk of cardiovascular disease. It was her first experience working with data on human patients, and it inspired her to continue working in medical research.
When she was a first-year student, Sappington spent the year at the Koch Institute, working with a graduate student to determine how liver cells respond to infection by hepatitis B virus. The summer after that, she went back to the NIH to contribute to a different project. This one still involved human health data, but it was more focused on building a computational tool. Sappington helped develop an algorithm that would quickly calculate how similar two genomes or proteins were to each other, a technology that could be used to screen for different bacteria strains in real-time.
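The article doesn't specify how the NIH similarity algorithm worked. One standard fast approach to scoring how alike two genomes or proteins are is k-mer set comparison (Jaccard similarity over length-k substrings), sketched below purely as an illustration; the function names and the choice of k are hypothetical, not drawn from Sappington's actual code.

```python
def kmers(seq, k=8):
    """Return the set of all length-k substrings (k-mers) of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard_similarity(a, b, k=8):
    """Fraction of k-mers two sequences share: 1.0 for identical sequences,
    near 0 for unrelated ones. Runs in time roughly linear in sequence length,
    which is why k-mer sketches are popular for fast strain screening."""
    ka, kb = kmers(a, k), kmers(b, k)
    if not ka or not kb:
        return 0.0
    return len(ka & kb) / len(ka | kb)

# Identical sequences score 1.0; a single mutation lowers the score slightly.
seq = "ACGTACGTGGCTAGCTAGGATCCATTGC"
score = jaccard_similarity(seq, seq)
```

Because it avoids full alignment, this kind of set-based comparison scales to screening many bacterial genomes against each other quickly, in the spirit of the real-time screening described above.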
“I wanted to kind of get my feet wet in all the different kinds of ways computer science and biology and human health can interact,” she says.
Since her return from the NIH at the beginning of her sophomore year, Sappington has been working in Aviv Regev’s lab in the Broad Institute of MIT and Harvard. She says Regev, a professor in the MIT Department of Biology, has been nothing short of an inspiration.
“She herself is just an incredible role model for the world of computational biology,” Sappington says.
The main initiative of Regev’s lab is the Human Cell Atlas, which was recently named Science’s Breakthrough of the Year. It’s like a layer on top of the Human Genome Project, Sappington says. The team is working to identify and catalogue the different types of cells, such as skin cells and lung cells. The cataloging is needed because even though these cells carry the exact same DNA genome, they have different specialized functions, and therefore can’t be identified by genome alone.
“Within a given tissue, like your skin tissue, cells are actually like a whole collage of different molecular profiles in how they express their genes,” she says. “So while the underlying genome is the same, there’s all sorts of other factors that make your cells express those genes — which turn into proteins — differently.”
Because the human body contains so many different types of cells, teams of researchers work on different pieces. Sappington works on data analysis as part of a team that is classifying retinal cells. It’s a unique challenge, she says, because the retina has more than 40 different types of cells, all of which respond to disease in different ways. While still chipping away at human retinal cell types, her team contributed to a recently published retinal cell atlas for the macaque monkey. For her undergraduate research career, Sappington was named a 2018-2019 Goldwater Scholar.
Dancing, speaking, leading
Before coming to MIT, Sappington had never been involved in dancing. But after she saw a showcase by the Asian Dance Team her first year, she decided to give it a try. After a few semesters dancing with ADT, Sappington also joined MIT DanceTroupe, where she found the culture to be creative, supportive, and incredibly fun.
“[I] just really fell in love with the community, and the general community of dancers at MIT,” she says.
Dance wasn’t the only aspect of the arts and humanities at MIT that she loved. She is also a part of the Burchard Scholars program, which allows students with a particular interest in the humanities to explore that topic. After she took a linguistics class with Professor David Pesetsky her first year, that field became her official humanities concentration. She ended up taking the next level of that class, which centered around syntax, and then she and five other students later created their own special subject class on linguistics.
“Essentially linguistics is the study of how language as a whole works, and the underlying rules that govern it,” she says. “It interfaces with brain and cognitive science, and even computer science, and how language is learned and acquired.”
Outside of class, Sappington has also been involved with TechX, a student-run organization that is responsible for many of MIT’s tech-related events, including HackMIT. Events also include the makeathon MakeMIT, the spring career fair and technology demo xFair, and high school mentoring program THINK. After serving on and running an event committee, Sappington served as the overall director for TechX in her junior year. While she’s no longer in charge, she’s still grateful to be part of the team.
“The whole thing was like one big family. … Each committee has its own intercommittee pride with the event that they run, but then everyone also has to rely on each other,” she says.
Machine learning across the pond
After graduation, Sappington will be heading off to University College London to earn her MS in machine learning. Her goal is to explore machine learning in a context that isn’t biology, so that she can learn new and different approaches that she might later be able to apply to biological challenges. The second year of her Marshall Scholarship will be spent at Cambridge University, where she will do a full year of research, likely involving machine learning applied to health care or other biological questions.
Her ultimate goal is to find new and better ways to use machine learning and technology to improve the health care system. To that end, she aims to get her MD/PhD after the next two years in England. After volunteering at the Massachusetts General Hospital and shadowing doctors in the Boston area, Sappington is pretty certain she wants a career where she can interact with patients while still being involved with computer science and biology. She’s excited to move forward with the next chapter of her life — but when it comes to leaving MIT, she’s got understandably mixed feelings.
“I think no matter where I would be going after graduation, it’s bittersweet to leave the incredible community that is the MIT community,” she says.
Celebrated architect I.M. Pei ’40 died on May 16 in New York City. He was 102.
Over the course of a long international career, he designed notable buildings that included museums, cultural and research centers, civic buildings, and office towers. A dedicated modernist, he received the architecture world’s highest honors for his large body of work.
Among his best-known projects are the glass pyramid entrance pavilion he designed for the Louvre museum in Paris, and the East Building of the National Gallery of Art in Washington.
In 1964, Jacqueline Kennedy chose Pei to design the John F. Kennedy Presidential Library and Museum in Dorchester, Massachusetts. Other Boston-area projects include the west wing of the Museum of Fine Arts.
“Pei was a giant whose vast and varied output consistently rose to the civic responsibility of architecture, elevating cultural, institutional, and residential buildings alike to monuments of modern life,” says Hashim Sarkis, dean of the MIT School of Architecture and Planning.
Pei designed four buildings for the MIT campus:
- Cecil and Ida Green Building for Earth Sciences (Building 54), 1962
- Camille Edouard Dreyfus Chemistry Building (Building 18), 1967
- Ralph Landau Building for Chemical Engineering (Building 66), 1976
- Wiesner Building (Building E15, original home of the MIT Media Lab), 1985
“Pei's contribution to the physical environment of MIT has been significant, with several key buildings that established the form of the contemporary campus,” says Andrew Scott, professor and acting head of the Department of Architecture. “Building 66, which poses a triangular form that finely resolves the forces of the urban geometry, and Building 18, which elegantly frames the landscape of the inner quads, are still personal favorites and outstanding laboratory typologies to this day.”
Ieoh Ming Pei was born on April 26, 1917, in Canton (now Guangzhou), China. The son of a prominent banker, he grew up in Shanghai and Hong Kong. Pei began his college studies at the University of Pennsylvania before transferring to MIT, from which he graduated in 1940 with a bachelor of architecture degree. His thesis title was “Standardized Propaganda Units for War Time and Peace Time China.”
He met his wife, Eileen (Ay-Ling) Loo, also from China, while he was at MIT and she was studying art at Wellesley College. They married when she graduated, in 1942; both then pursued graduate study at Harvard University, from which he received a master’s degree in 1946.
After teaching briefly at Harvard, Pei worked for New York commercial real estate developer William Zeckendorf for 12 years. During this time he hired a former student, Henry Cobb, with whom he would be professionally associated for six decades. Pei founded his own firm, I.M. Pei and Associates (later Pei, Cobb and Freed), in 1955 with Cobb and Eason Leonard. Among a vast number of projects, the firm produced the 700-foot-tall John Hancock Tower in Boston, designed by Cobb.
“I.M. Pei’s work across many contexts and cultures has an enduring, timeless quality,” says Scott. “He was a master-architect with a deep understanding and sophistication with issues of urbanism, scale of object and detail, spatial orchestration, and formal composition.”
Among his many awards and honors were the Pritzker Prize in 1983; the Gold Medal of the American Institute of Architects in 1979; and the Japanese Praemium Imperiale, for lifetime achievement, in 1989.
Pei was a member of the MIT Corporation (1972–1977 and 1978–1983) and an honorary member of the Council for the Arts at MIT.
He is survived by sons Li Chung Pei and Chien Chung Pei, both architects; daughter Liane Pei, a lawyer; and grandchildren and great-grandchildren. His son T’ing Chung Pei MCP ‘67, an urban planner, died in 2003. Eileen Pei died in 2014.
Do you have challenges in your professional or personal life to which you seemingly have no answers? Most of us do. Hal Gregersen, executive director of the MIT Leadership Center, author, and a motivational speaker, says we’re usually stuck because we’re asking the wrong question.
So, how do we figure out the right one?
That question sent Gregersen on a research quest including more than 200 interviews with some of the world’s most creative leaders, such as Elon Musk and Jeff Bezos. His latest book, "Questions Are the Answer" (Harper Collins, 2018), delivers the insights he gained about the conditions that give rise to catalytic questions — questions that dissolve barriers to creative thinking and channel the pursuit of solutions into new, accelerated pathways. This work has also led to Gregersen’s newest MIT Sloan Executive Education program launching in July, Questions Are the Answer: A Breakthrough Approach to Creative Problem Solving, Innovation, and Change.
Banish your blind spots
The power and privilege of the C-suite can leave leaders insulated from internal trouble, external signals, and important insights. This “CEO bubble” creates a dangerous disconnect for leaders who must recognize when a major change in direction is required, yet often lack the information required to perceive a looming threat or opportunity.
This “isolation” challenge at the apex of organizations surfaced again and again in Gregersen’s research. This predicament prevents leaders from getting a true sense of corporate performance, innovation, culture, morale, outcomes, and other critically important issues.
While persistent CEOs may eventually get the information they request, it’s the questions they didn’t know to ask that often come back to haunt them. These unanticipated risks — or “unknown unknowns” — are business threats that can come out of nowhere. “Just ask the executives of the GPS device makers that were rendered irrelevant by free navigation apps on phones,” says Gregersen, “or the taxi businesses upended by ordinary car owners selling rides through Uber and Lyft.”
Fortunately, the dangerous territory of unknown unknowns can be lit up by an insightful question. Gregersen has numerous examples of leaders who found the answer to a serious challenge by reframing the questions they asked. For example, the entrepreneur who created GoldieBlox, Debbie Sterling, wondered, “Why are all the great building toys made for boys?” Or, consider Nobel laureate Richard Thaler, who questioned, “Would it change economic theory if we stopped pretending people were rational?” Marc Benioff, founder of Salesforce, asked, “Why are we still loading and upgrading software when we have the internet?” And Rod Drury, founder of Xero, routinely asks, “What is the exact opposite of what an incumbent would expect us to do?” in order to challenge the industry.
Asking such questions is essential in today’s world where globalization, digitization, and disruption push leaders to the edge of uncertainty and urge them to figure out what they don’t know they don’t know — before it’s too late.
While leaders can’t formulate brilliant questions on command, they can increase the chances that flashes of insight will occur by understanding the conditions that give rise to them and pursuing those conditions.
How often do you spend time in strikingly different places? How often do you talk with people who are extraordinarily different from you? Gregersen’s work explores how the most innovative leaders recognize their isolated positions and create intentional, formulaic strategies to fix this. They seek out new people, places, and experiences — often uncomfortable situations — that force them to uncover what they didn’t know they didn’t know and receive raw, unadulterated feedback. During the internet’s early years, Benioff traveled the world seeking new insights from dozens of strikingly different people. He and his senior leaders now regularly go on global “listening tours,” looking for weak strategic signals.
Leaving your comfort zone puts you into a heightened state of alertness. You become extra receptive as you struggle to get your bearings or get on top of a disconcerting situation. When you’re in a situation like this, fresh questions race through your mind. And one of those questions just might be a game changer.
Held in July and again in October, Questions Are the Answer aims to provide executives with frameworks and behavioral habits for cultivating an inquiry-driven approach to leadership and life. Gregersen will be joined by INSEAD Professor Roger Lehman and Executive Coach Kristen Kolakowski, who together will engage participants in hands-on discovery and deliver unique insights into the behaviors of extraordinary leaders that result in game-changing questions and organization-wide change. INSEAD is a graduate business school with campuses in Europe, Asia, and the Middle East.
Trying to duplicate the power of the sun for energy production on Earth has challenged fusion researchers for decades. One path to endless carbon-free energy has focused on heating and confining plasma fuel in tokamaks, which use magnetic fields to keep the turbulent plasma circulating within a doughnut-shaped vacuum chamber and away from the walls. Fusion researchers have favored contouring these tokamak plasmas into a triangular or D shape, with the curvature of the D stretching away from the center of the doughnut, which allows plasma to withstand the intense pressures inside the device better than a circular shape.
Led by research scientists Alessandro Marinoni of MIT's Plasma Science and Fusion Center (PSFC) and Max Austin, of the University of Texas at Austin, researchers at the DIII-D National Fusion Facility have discovered promising evidence that reversing the conventional shape of the plasma in the tokamak chamber can create a more stable environment for fusion to occur, even under high pressure. The results were recently published in Physical Review Letters and Physics of Plasmas.
Marinoni first experimented with the “reverse-D” shape, also known as “negative triangularity,” while pursuing his PhD on the TCV tokamak at Ecole Polytechnique Fédérale de Lausanne, Switzerland. The TCV team was able to show that negative triangularity helps to reduce plasma turbulence, thus increasing confinement, a key to sustaining fusion reactions.
“Unfortunately, at that time, TCV was not equipped to operate at high plasma pressures with the ion temperature being close to that of electrons,” notes Marinoni, “so we couldn’t investigate regimes that are directly relevant to fusion plasma conditions.”
Growing up outside Milan, Marinoni developed an interest in fusion through an early passion for astrophysical phenomena, hooked in preschool by the compelling mysteries of black holes.
“It was fascinating because black holes can trap light. At that time I was just a little kid. As such, I couldn’t figure out why the light could be trapped by the gravitational force exerted by black holes, given that on Earth nothing like that ever happens.”
As he matured he joined a local amateur astronomy club, but eventually decided black holes would be a hobby, not his vocation.
“My job would be to try producing energy through nuclear fission or fusion; that’s the reason why I enrolled in the nuclear engineering program in the Polytechnic University of Milan.”
After studies in Italy and Switzerland, Marinoni seized the opportunity to join the PSFC’s collaboration with the DIII-D tokamak in San Diego, under the direction of MIT professor of physics Miklos Porkolab. As a postdoc, he used MIT’s phase contrast imaging diagnostic to measure plasma density fluctuations in DIII-D, later continuing work there as a PSFC research scientist.
Max Austin, after reading the negative triangularity results from TCV, decided to explore the possibility of running similar experiments on the DIII-D tokamak to confirm the stabilizing effect of negative triangularity. For the experimental proposal, Austin teamed up with Marinoni and together they designed and carried out the experiments.
"The DIII-D research team was working against decades-old assumptions,” says Marinoni. “It was generally believed that plasmas at negative triangularity could not hold high enough plasma pressures to be relevant for energy production, because of macroscopic scale Magneto-Hydro-Dynamics (MHD) instabilities that would arise and destroy the plasma. MHD is a theory that governs the macro-stability of electrically conducting fluids such as plasmas. We wanted to show that under the right conditions the reverse-D shape could sustain MHD stable plasmas at high enough pressures to be suitable for a fusion power plant, in some respects even better than a D-shape."
While D-shaped plasmas are the standard configuration, they have their own challenges. They are affected by high levels of turbulence, which hinders them from achieving the high pressure levels necessary for economic fusion. Researchers have solved this problem by creating a narrow layer near the plasma boundary where turbulence is suppressed by large flow shear, thus allowing inner regions to attain higher pressure. In the process, however, a steep pressure gradient develops in the outer plasma layers, making the plasma susceptible to instabilities called edge localized modes that, if sufficiently powerful, would expel a substantial fraction of the built-up plasma energy, thus damaging the tokamak chamber walls.
DIII-D was designed for the challenges of creating D-shaped plasmas. Marinoni praises the DIII-D control group for “working hard to figure out a way to run this unusual reverse-D shape plasma.”
The effort paid off. DIII-D researchers were able to show that even at higher pressures, the reverse-D shape is as effective at reducing turbulence in the plasma core as it was in the low-pressure TCV environment. Despite previous assumptions, DIII-D demonstrated that plasmas at reversed triangularity can sustain pressure levels suitable for a tokamak-based fusion power plant; additionally, they can do so without the need to create a steep pressure gradient near the edge that would lead to machine-damaging edge localized modes.
Marinoni and colleagues are planning future experiments to further demonstrate the potential of this approach in an even more fusion-power relevant magnetic topology, based on a “diverted” tokamak concept. He has tried to interest other international tokamaks in experimenting with the reverse configuration.
“Because of hardware issues, only a few tokamaks can create negative triangularity plasmas; tokamaks like DIII-D that are not designed to produce plasmas at negative triangularity need a significant effort to produce this plasma shape. Nonetheless, it is important to engage the fusion community worldwide to more fully establish the database on the benefits of this shape.”
Marinoni looks forward to where the research will take the DIII-D team. He also looks back to his introduction to the tokamak, which has become the focus of his research.
“When I first learned about tokamaks I thought, ‘Oh, cool! It’s important to develop a new source of energy that is carbon free!’ That is how I ended up in fusion.”
This research is sponsored by the U.S. Department of Energy Office of Science's Fusion Energy Sciences program, using the DIII-D National Fusion Facility.
The ultimate degree of control for engineering would be the ability to create and manipulate materials at the most basic level, fabricating devices atom by atom with precise control.
Now, scientists at MIT, the University of Vienna, and several other institutions have taken a step in that direction, developing a method that can reposition atoms with a highly focused electron beam and control their exact location and bonding orientation. The finding could ultimately lead to new ways of making quantum computing devices or sensors, and usher in a new age of “atomic engineering,” they say.
The advance is described today in the journal Science Advances, in a paper by MIT professor of nuclear science and engineering Ju Li, graduate student Cong Su, Professor Toma Susi of the University of Vienna, and 13 others at MIT, the University of Vienna, Oak Ridge National Laboratory, and in China, Ecuador, and Denmark.
“We’re using a lot of the tools of nanotechnology,” explains Li, who holds a joint appointment in materials science and engineering. But in the new research, those tools are being used to control processes that are yet an order of magnitude smaller. “The goal is to control one to a few hundred atoms, to control their positions, control their charge state, and control their electronic and nuclear spin states,” he says.
While others have previously manipulated the positions of individual atoms, even creating a neat circle of atoms on a surface, that process involved picking up individual atoms on the needle-like tip of a scanning tunneling microscope and then dropping them in position, a relatively slow mechanical process. The new process manipulates atoms using a relativistic electron beam in a scanning transmission electron microscope (STEM), so it can be fully electronically controlled by magnetic lenses and requires no mechanical moving parts. That makes the process potentially much faster, and thus could lead to practical applications.
Using electronic controls and artificial intelligence, “we think we can eventually manipulate atoms at microsecond timescales,” Li says. “That’s many orders of magnitude faster than we can manipulate them now with mechanical probes. Also, it should be possible to have many electron beams working simultaneously on the same piece of material.”
“This is an exciting new paradigm for atom manipulation,” Susi says.
Computer chips are typically made by “doping” a silicon crystal with other atoms needed to confer specific electrical properties, thus creating “defects” in the material — regions that do not preserve the perfectly orderly crystalline structure of the silicon. But that process is scattershot, Li explains, so there’s no way of controlling with atomic precision where those dopant atoms go. The new system allows for exact positioning, he says.
The same electron beam can be used for knocking an atom both out of one position and into another, and then “reading” the new position to verify that the atom ended up where it was meant to, Li says. While the positioning is essentially determined by probabilities and is not 100 percent accurate, the ability to determine the actual position makes it possible to select out only those that ended up in the right configuration.
The power of the very narrowly focused electron beam, about as wide as an atom, knocks an atom out of its position, and by selecting the exact angle of the beam, the researchers can determine where it is most likely to end up. “We want to use the beam to knock out atoms and essentially to play atomic soccer,” dribbling the atoms across the graphene field to their intended “goal” position, he says.
“Like soccer, it’s not deterministic, but you can control the probabilities,” he says. “Like soccer, you’re always trying to move toward the goal.”
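A toy simulation can make this “knock, read, retry” loop concrete. Everything below is illustrative — the one-dimensional lattice, the jump probability, and the shot budget are invented for the sketch, not measured physics:

```python
import random

def place_atom(start, goal, jump_prob=0.6, max_shots=1000, seed=0):
    """Toy model of probabilistic atom placement: each electron-beam
    'shot' kicks the atom one lattice site. With probability jump_prob
    it lands toward the goal; otherwise it bounces away. After every
    shot the position is 'read' (here, simply inspected) before the
    next shot is aimed. All parameters are illustrative."""
    rng = random.Random(seed)  # seeded for reproducibility
    pos, shots = start, 0
    while pos != goal and shots < max_shots:
        toward = 1 if goal > pos else -1
        pos += toward if rng.random() < jump_prob else -toward
        shots += 1
    return pos, shots
```

Setting `jump_prob` below 1 captures the paper's point that the process "is essentially determined by probabilities": the beam angle biases the outcome, and reading the position after each kick lets the experimenters keep only atoms that land where intended.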
In the team’s experiments, they primarily used phosphorus atoms, a commonly used dopant, in a sheet of graphene, a two-dimensional sheet of carbon atoms arranged in a honeycomb pattern. The phosphorus atoms end up substituting for carbon atoms in parts of that pattern, thus altering the material’s electronic, optical, and other properties in ways that can be predicted if the positions of those atoms are known.
Ultimately, the goal is to move multiple atoms in complex ways. “We hope to use the electron beam to basically move these dopants, so we could make a pyramid, or some defect complex, where we can state precisely where each atom sits,” Li says.
This is the first time electronically distinct dopant atoms have been manipulated in graphene. “Although we’ve worked with silicon impurities before, phosphorus is both potentially more interesting for its electrical and magnetic properties, but as we’ve now discovered, also behaves in surprisingly different ways. Each element may hold new surprises and possibilities,” Susi adds.
The system requires precise control of the beam angle and energy. “Sometimes we have unwanted outcomes if we’re not careful,” he says. For example, sometimes a carbon atom that was intended to stay in position “just leaves,” and sometimes the phosphorus atom gets locked into position in the lattice, and “then no matter how we change the beam angle, we cannot affect its position. We have to find another ball.”
In addition to detailed experimental testing and observation of the effects of different angles and positions of the beams and graphene, the team also devised a theoretical basis to predict the effects, called primary knock-on space formalism, that tracks the momentum of the “soccer ball.” “We did these experiments and also gave a theoretical framework on how to control this process,” Li says.
The cascade of effects that results from the initial beam takes place over multiple time scales, Li says, which made the observations and analysis tricky to carry out. The actual initial collision of the relativistic electron (moving at about 45 percent of the speed of light) with an atom takes place on a scale of zeptoseconds — trillionths of a billionth of a second — but the resulting movement and collisions of atoms in the lattice unfolds over time scales of picoseconds or longer — billions of times longer.
Dopant atoms such as phosphorus have a nonzero nuclear spin, which is a key property needed for quantum-based devices because that spin state is easily affected by elements of its environment such as magnetic fields. So the ability to place these atoms precisely, in terms of both position and bonding, could be a key step toward developing quantum information processing or sensing devices, Li says.
“This is an important advance in the field,” says Alex Zettl, a professor of physics at the University of California at Berkeley, who was not involved in this research. “Impurity atoms and defects in a crystal lattice are at the heart of the electronics industry. As solid-state devices get smaller, down to the nanometer size scale, it becomes increasingly important to know precisely where a single impurity atom or defect is located, and what are its atomic surroundings. An extremely challenging goal is having a scalable method to controllably manipulate or place individual atoms in desired locations, as well as predicting accurately what effect that placement will have on device performance.”
Zettl says that these researchers “have made a significant advance toward this goal. They use a moderate energy focused electron beam to coax a desirable rearrangement of atoms, and observe in real-time, at the atomic scale, what they are doing. An elegant theoretical treatise, with impressive predictive power, complements the experiments.”
Besides the leading MIT team, the international collaboration included researchers from the University of Vienna, the University of Chinese Academy of Sciences, Aarhus University in Denmark, National Polytechnical School in Ecuador, Oak Ridge National Laboratory, and Sichuan University in China. The work was supported by the National Science Foundation, the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies, the Austrian Science Fund, the European Research Council, the Danish Council for Independent Research, the Chinese Academy of Sciences, and the U.S. Department of Energy.
As part of its ongoing efforts to advance the Kendall Square innovation ecosystem and contribute to the vibrancy of the neighborhood, MIT has entered into an arrangement involving the three buildings known as the Osborn Triangle. MIT News spoke with Israel Ruiz, MIT’s executive vice president and treasurer, about the announcement and its implications for the Institute and the Kendall area.
Q: What is being announced today?
A: MIT has completed a transaction that enabled Harrison Street Real Estate to acquire the three-building complex known as Osborn Triangle. This complex includes 610 Main Street, 1 Portland Street, and 700 Main Street, and is currently leased to Pfizer, Novartis, and LabCentral. Under this arrangement, MIT will retain ownership of the land and will “ground lease” the complex to a new joint venture that will be led by Harrison Street Real Estate and will include Bulfinch Companies and MIT, which will retain a minority interest in the transaction.
The Osborn Triangle complex was developed by MIT over the past two decades and has helped transform what used to be a parking lot and vacant building into an anchor of the innovation ecosystem in Kendall Square. This is a great example of the positive impact that MIT’s investment in the area can have. The formation of the joint venture with new investment partners will allow MIT to continue to reinvest in the area surrounding its campus and contribute to a vibrant and inclusive Kendall Square. Through the Kendall Square Initiative, construction is currently underway that will bring a mix of uses to the area, including academic, commercial research, dormitory, market-rate and affordable housing, and retail, as well as a much-needed grocery store that will open later this year.
Q: What is a ground lease?
A: A ground lease is a mechanism that allows MIT to reduce its investment in a property by giving another entity rights to operate the property for a period of time, after which the property will revert back to MIT control. This structure enables MIT to redeploy capital to new investments in the area, as it is doing through the Kendall Square Initiative and its acquisition of 10 acres of land from the federal government as part of the Volpe transaction. Investment partners help MIT to continue to further improve the area surrounding its campus.
Q: Is this the first time MIT has done this?
A: No, MIT has entered into many ground lease transactions over the years. Today’s transaction is highly consistent with MIT’s long-held philosophy of collaborating with others to invest in the area while ensuring that MIT retains ownership of the land over the long term. Technology Square and University Park are two recent examples of this approach. We’re excited to add Osborn Triangle to this list.
The winner of Wednesday’s MIT $100K Entrepreneurship Competition was a startup helping oil well owners remotely monitor and control the pumping of their wells, increasing production while reducing equipment failures and cutting methane emissions.
Acoustic Wells, a team including two MIT postdocs, was awarded the grand prize after eight finalist teams pitched their projects to judges and hundreds of attendees at Kresge Auditorium. T-var EdTech, a company developing phonics-based devices that help children learn to read, earned the $10,000 audience choice prize.
The MIT $100K, MIT’s largest entrepreneurship competition, celebrated its 30th anniversary this year and featured talks from $100K co-founder Peter Mui ’82 and MassChallenge founder John Harthorne MBA ’07, who won the $100K grand prize in 2007.
Mui reflected on how much the program has grown since he and his classmates first had the idea in 1989, back when it was the MIT $10K. Harthorne talked about the inspiration he got as an MBA candidate from his MIT classmates tackling some of the world’s biggest problems.
“MIT taught me to dream big, and that’s what this event is all about,” Harthorne said to the crowd at the sold out auditorium. “Every one of the teams competing tonight could go on to do great things.”
Improving oil well pumping
The majority of North America’s 1.4 million oil and gas wells are run by independent owners operating batches of hundreds or thousands of aging wells. Working with thin profit margins and older equipment, the owners rely on small teams of workers to manually inspect each well in a labor-intensive daily process that continues year-round.
When setting up their pumping equipment, each owner must strike a balance: If they set up the wells to pump too slowly, they risk leaving oil in the ground and losing much-needed revenue. If they pump too fast, they risk breaking their equipment and causing pollution.
“The result [of pumping too fast] is similar to when you’re drinking with a straw from a cup and there’s nothing left, so you hear that bubble sound,” Acoustic Wells founder and CEO Sebastien Mannai SM ’14 PhD ’18 told the audience. “The same thing happens with oil wells, but on a much bigger scale.”
In the case of oil wells, those “bubbles” are pockets of methane that enter the pump and cause it to fail, unleashing unnecessary greenhouse gases in the process.
To address this problem, Acoustic Wells is developing an “internet of things” device based on a novel sensor and an online cloud solution to help well owners control their equipment using real-time pumping data.
Mannai, a postdoc in the Department of Aeronautics and Astronautics, compared the device’s sensor to a stethoscope: a microphone-like sensor connects to the wellhead at the surface and records the sound of the pump. A field computer processes the data at the edge before sending the results to a cloud-based system for real-time analysis. Owners can view the processed data on a dashboard and remotely send orders to the well to change its pump settings, simplifying the inspection and control processes.
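As a rough sketch of the edge-processing idea described above — not the company’s actual code; the windowing, statistics, and threshold are all invented for illustration — a field computer might reduce a raw acoustic trace to a small summary payload before anything is sent to the cloud:

```python
import math

def summarize_pump_audio(samples, window=1024):
    """Hypothetical edge-processing step: reduce a raw acoustic trace
    from the wellhead sensor to a few summary statistics, so only a
    small payload (not the full recording) is sent to the cloud."""
    rms = []
    for i in range(0, len(samples) - window + 1, window):
        chunk = samples[i:i + window]
        rms.append(math.sqrt(sum(s * s for s in chunk) / window))
    mean = sum(rms) / len(rms)
    # A sudden spike in windowed RMS could correspond to the "bubble
    # sound" of gas interference; the 3x threshold is purely
    # illustrative, not a calibrated value.
    return {"mean_rms": mean, "peak_rms": max(rms),
            "flag_gas": max(rms) > 3 * mean}
```

The design point is that the heavy signal data stays at the well; only compact summaries and alerts travel over the network, which is what makes real-time monitoring of thousands of remote wells practical.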
The company has already conducted field tests with an early version of its solution on 30 wells across Oklahoma, Texas, and Louisiana. In those tests, the solution was able to detect key issues and the wells were adjusted to increase their efficiency and reduce their emissions, Mannai says.
The team, which also includes Charles-Henri Clerget, a postdoctoral associate in the Department of Mathematics, and Louis Creteur, the IoT and cloud architect at the company Leanbox, will use the winnings from the $100K competition to hire its first employees and continue to scale its user base.
The company is initially targeting independent well owners in North America. It plans to commercialize its product as a software-as-a-service (SaaS) platform.
Overall, Acoustic Wells believes its solution could save independent well owners $6 billion annually while preventing the methane equivalent of 240 million tons of carbon dioxide.
Teaching kids to read
T-var EdTech has developed a product called The Read Read that acts as a sound board when blocks of letters are placed on it, in order to mimic phonics, a proven method for teaching children to read.
Phonics has historically required adults to sound out letters and words with children as they read. The process is time-consuming and best done one-on-one. The Read Read allows children to use phonics on their own.
The letters that come with the device represent the major English speech sounds. Children place the letters on the device, which sounds out each letter when it is touched. In the first version of the device, the company has placed braille underneath large black letters that contrast with the white blocks, helping children who are blind or visually impaired learn braille.
In a pilot with the Perkins School for the Blind, students previously classified as nonreaders learned to read using the company’s device, according to founder Alex Tavares, a graduate of Harvard University’s Graduate School of Education.
The company has begun preselling to parents and schools and has partnered with LC Industries, one of the largest employers of adults with visual impairments in the U.S.
“Phonics works, but it’s not scalable in its current implementation,” Tavares told the audience. “The Read Read scales phonics by allowing kids to practice independently. Finally, phonics is accessible to all kids.”
Wednesday night’s competition was the culmination of a process that began in the winter for semifinalist teams, who received funding and mentoring to develop comprehensive business plans around their ideas.
The event was run by students and supported by the Martin Trust Center for MIT Entrepreneurship and the MIT Sloan School of Management.
This year’s judges were TJ Parker, co-founder and CEO of PillPack; Mira Wilczek ’04 MBA ’09, president and CEO of Cogo Labs; Thomas Collet PhD ’91, president and CEO of Phrixus Pharmaceuticals; Tanguy Chau SM ’10 PhD ’10 MBA ’11, an investor and co-founder of Corvium; and Katie Rae, the CEO and managing general partner of The Engine.
Contrary to appearances at times, Europeans have become more receptive to immigration in recent decades. That’s one of multiple new findings from an innovative study on European public opinion co-authored by an MIT political scientist.
The study aggregates public opinion polls from 27 countries over a span of 36 years, offering new insight into broad trends and changes in European politics and society. While some dramatic recent political events, such as Britain’s 2016 vote to leave the European Union, have highlighted anti-immigrant sentiment, the overall picture looks rather different.
“There has been a general liberalizing trend on immigration, which is contrary to a lot of rhetoric and commentary,” says Devin Caughey, an associate professor of political science at MIT and co-author of the study. “Europeans, on average, when you ask them the same questions over time, have given more pro-immigration answers than they did a generation ago.”
As Caughey notes, that may be due to “generational replacement,” as older citizens who view immigration less favorably may be replaced by younger, more pro-immigrant cohorts of people.
The study also suggests European public opinion features a gender gap in a couple of areas. On economic policy, women throughout the continent have favored a more expansive state role during the study’s time period, which runs from 1981 through 2016.
“There’s [long] been this gender gap on economics,” Caughey says. “Women have always been less conservative than men, throughout the period we have. They are always more supportive of welfare spending or government responsibilities for the needy or unemployed.”
In recent years, a gender gap also seems to be opening up on a variety of social issues, including gender equity, gay rights, and abortion rights. However, as the authors note in the paper, this disparity is “much less pronounced” on social issues than on economic matters.
The paper, “Policy Ideology in European Mass Publics, 1981-2016,” has been published online by the American Political Science Review. In addition to Caughey, the authors are Tom O’Grady PhD ’17, an assistant professor in quantitative political science at University College London, and Christopher Warshaw, an assistant professor of political science at George Washington University.
The research on the project began while O’Grady was a doctoral candidate in the MIT Department of Political Science; Warshaw was a faculty member in the department at the time as well.
European geopolitics: The big split
To conduct the study, the researchers built a comprehensive database of multinational, ongoing political surveys conducted in Europe from 1981 to 2016. That includes the European Social Survey, the European Values Survey, parts of the International Social Survey Program, the Pew Global Attitudes Survey, and some Eurobarometer surveys. Overall, the study encompassed about 2.7 million individual survey responses to 109 different questions.
Examining data at such a large scale produced some broad insights, such as differing patterns in public opinion among different regions of Europe.
“There is this very clear geographic cleavage,” Caughey says. “Northern and western Europe tend to be relatively conservative on economic issues, and relatively progressive on social and cultural and immigration issues. The old Eastern Bloc and southern Europe generally tends to be pretty socially conservative, but much more comfortable with a strong government intervention in the economy. Which makes [historical] sense when you’re talking about the former communist countries.”
Given these nuances, the scholars decided not to measure the political views of Europeans along a simple ideological spectrum, such as by asking survey respondents where they place themselves on a left-right “self-placement” scale.
“That is measuring how people conceive of themselves and their identities, as much as their policy preferences,” Caughey says.
Instead, the researchers measured political views in four broad categories: absolute economic views, relative economic views, social issues, and immigration matters. On economic policy, an “absolute” economic view concerns, say, the objective size of a country’s welfare state. A “relative” economic view is whether someone would like to change the size of that welfare state. These things are sometimes conflated in accounts of public opinion.
Thus people in Denmark and Latvia, for instance, have similar views about the ideal absolute size of the state. But as the study shows, citizens of Denmark, whose welfare state is already more expansive than Latvia’s, express much greater relative conservatism.
Immigration and polarization
The researchers also decided to put immigration in its own political category, in part to examine how closely views on the matter connect with other political and social positions.
“They are associated,” Caughey says. “At any given point in time a country that tends to be conservative on immigration, relatively anti-immigrant, and relatively nationalist, also tends to be socially conservative. But they do exhibit somewhat different patterns over time.”
The dynamics of public opinion on immigration are also complex: As immigration becomes a higher-profile issue, a rise in anti-immigration sentiment may also be accompanied by a rise in distinctly pro-immigrant views, and increased political polarization on the subject.
“It’s possible that where immigration was a low-salience issue, then becomes a salient issue, and anti-immigration parties arise, other people react to that and become more fervent to their pro-immigration views,” Caughey notes. “For [some] people now, being pro-immigrant is a statement of progressive identity. … So there is an interesting interplay between the rise of anti-immigrant parties and the reaction against them.”
Other academic experts on public opinion say the paper is a useful contribution to the field. James Stimson, the Raymond Dawson Distinguished Bicentennial Professor of Political Science at the University of North Carolina, helped advance the discipline with his own research tracking shifts in public opinion over time. Stimson says the approach of the authors to European public opinion “effectively gets more information from the same [existing] data and makes it possible to construct multiple measures for multiple countries. That extension is their unique contribution and an important one.”
For his part, Caughey says the researchers hope their current paper will help generate additional research about European public opinion, in which scholars might evaluate changes in public opinion against issues like changes in secularism or government responsiveness to popular opinion.
Eleven MIT graduating seniors and current graduate students have been named winners in the 2019-2020 Fulbright U.S. Student Fellowship Program. In addition to the 11 students accepting their awards, three applicants from MIT were selected as finalists but decided to decline their grants.
MIT’s newest Fulbright Students will engage in independent research and English teaching assignments in Brazil, the Netherlands, Spain, Russia, Taiwan, and Senegal.
Sponsored by the U.S. Department of State’s Bureau of Educational and Cultural Affairs, the mission of Fulbright is to promote cultural exchange, increase mutual understanding, and build lasting relationships among people of the world. The Fulbright U.S. Student Program offers grants in over 140 countries.
The MIT students were supported in the application process by the Presidential Committee on Distinguished Fellowships, chaired by professors Rebecca Saxe and Will Broadhead, and by MIT’s Distinguished Fellowships Office within Career Advising and Professional Development. The MIT winners are:
Annamarie "Anna" Bair ’18 earned a bachelor of science in computer science and engineering in June 2018 and will receive her master of engineering degree in computer science later this year. In Barcelona, Spain, Bair will engage in complex systems research.
Abigail "Abby" Bertics will graduate in June with a bachelor of science in electrical engineering and computer science. Her research in Yekaterinburg, Russia, will focus on natural language processing methods for understanding English second language acquisition by Russian speakers.
Hope Chen is a senior graduating with a bachelor of science in mechanical engineering. She will be going to Taiwan as an English Teaching Assistant in primary school classrooms. After completing her Fulbright program and returning to the U.S., Chen will matriculate in medical school.
Alexis D’Alessandro will graduate this spring with a bachelor of science in mechanical engineering. For her research in Aracaju, Brazil, she will develop an educational program and chemical sensing tool to promote water safety awareness among children.
Sarah DiIorio will earn her bachelor of science in biological engineering in June. She is headed to Eindhoven, the Netherlands, to conduct medical research related to cartilage regeneration for osteoarthritis.
Katie Fisher is a senior in MIT’s Scheller Teaching Education Program graduating with a bachelor of science in urban studies and planning with a concentration in education. As an English teaching assistant in the Netherlands, Fisher will work with students at a vocational college in Amsterdam.
Miranda McClellan ’18 received a bachelor of science in computer science and engineering in June 2018 and will earn her master of engineering degree in computer science this spring. McClellan will research automated scaling of 5G computer network resources in Barcelona, Spain.
Samira Okudo will graduate in June with a joint bachelor of science in computer science and comparative media studies. As an English teaching assistant in Brazil, she will work with university students training to be English-language instructors.
James Pelletier is a PhD candidate in physics. For his Fulbright research in Madrid, Spain, he will develop biophysical models to investigate how plants process information for cellular resource allocation and agricultural efficiency.
Jonars Spielberg is a third-year doctoral student in the Department of Urban Studies and Planning’s international development program. In Senegal, he will examine how the personal interactions of bureaucrats and farmers shape agricultural policy implementation in the country's main irrigated regions.
Catherine Wu will graduate in June with a bachelor of science in biology. She will be working with university students in Brazil as a Fulbright English Teaching Assistant.
MIT students interested in applying to the Fulbright U.S. Student Program should contact Julia Mongo in Distinguished Fellowships.
When making a complex decision, we often break the problem down into a series of smaller decisions. For example, when deciding how to treat a patient, a doctor may go through a hierarchy of steps — choosing a diagnostic test, interpreting the results, and then prescribing a medication.
Making hierarchical decisions is straightforward when the sequence of choices leads to the desired outcome. But when the result is unfavorable, it can be tough to decipher what went wrong. For example, if a patient doesn’t improve after treatment, there are many possible reasons why: Maybe the diagnostic test is accurate only 75 percent of the time, or perhaps the medication only works for 50 percent of patients. To decide what to do next, the doctor must take these probabilities into account.
In a new study, MIT neuroscientists explored how the brain reasons about probable causes of failure after a hierarchy of decisions. They discovered that the brain performs two computations using a distributed network of areas in the frontal cortex. First, the brain computes confidence over the outcome of each decision to figure out the most likely cause of a failure, and second, when it is not easy to discern the cause, the brain makes additional attempts to gain more confidence.
“Creating a hierarchy in one’s mind and navigating that hierarchy while reasoning about outcomes is one of the exciting frontiers of cognitive neuroscience,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.
MIT graduate student Morteza Sarafyazd is the lead author of the paper, which appears in Science on May 16.
Previous studies of decision-making in animal models have focused on relatively simple tasks. One line of research has focused on how the brain makes rapid decisions by evaluating momentary evidence. For example, a large body of work has characterized the neural substrates and mechanisms that allow animals to categorize unreliable stimuli on a trial-by-trial basis. Other research has focused on how the brain chooses among multiple options by relying on previous outcomes across multiple trials.
“These have been very fruitful lines of work,” Jazayeri says. “However, they really are the tip of the iceberg of what humans do when they make decisions. As soon as you put yourself in any real decision-making situation, be it choosing a partner, choosing a car, deciding whether to take this drug or not, these become really complicated decisions. Oftentimes there are many factors that influence the decision, and those factors can operate at different timescales.”
The MIT team devised a behavioral task that allowed them to study how the brain processes information at multiple timescales to make decisions. The basic design was that animals would make one of two eye movements depending on whether the time interval between two flashes of light was shorter or longer than 850 milliseconds.
A twist required the animals to solve the task through hierarchical reasoning: The rule that determined which of the two eye movements had to be made switched covertly after 10 to 28 trials. Therefore, to receive reward, the animals had to choose the correct rule, and then make the correct eye movement depending on the rule and interval. However, because the animals were not instructed about the rule switches, they could not straightforwardly determine whether an error was caused because they chose the wrong rule or because they misjudged the interval.
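The task structure described above can be sketched in a short simulation. This is a hypothetical illustration of the design as reported in the article, not the researchers’ actual experimental code; the interval range, rule encoding, and function names are assumptions for the sake of the example.

```python
import random

# Hypothetical sketch of the task structure: the animal judges whether
# an interval is shorter or longer than 850 ms, and a covert rule maps
# that judgment to one of two eye movements. The rule flips, without
# warning, every 10 to 28 trials (per the article).

BOUNDARY_MS = 850

def run_session(n_trials=100, seed=0):
    """Generate a sequence of (interval, rule, correct_response) trials."""
    rng = random.Random(seed)
    rule = 0  # covert rule: 0 or 1; flips which response is rewarded
    trials_until_switch = rng.randint(10, 28)
    log = []
    for _ in range(n_trials):
        interval = rng.uniform(550, 1150)    # sampled interval in ms (assumed range)
        is_long = interval > BOUNDARY_MS     # ground-truth category
        # Under rule 0, "long" maps to response 1; rule 1 reverses the mapping.
        correct_response = int(is_long) ^ rule
        log.append((interval, rule, correct_response))
        trials_until_switch -= 1
        if trials_until_switch == 0:         # covert rule switch
            rule ^= 1
            trials_until_switch = rng.randint(10, 28)
    return log

session = run_session()
```

Because the switches are covert, an agent seeing only reward feedback cannot tell directly whether an error came from misjudging the interval or from the rule having changed, which is exactly the attribution problem the study probes.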
The researchers used this experimental design to probe the computational principles and neural mechanisms that support hierarchical reasoning. Theory and behavioral experiments in humans suggest that reasoning about the potential causes of errors depends in large part on the brain’s ability to measure the degree of confidence in each step of the process. “One of the things that is thought to be critical for hierarchical reasoning is to have some level of confidence about how likely it is that different nodes [of a hierarchy] could have led to the negative outcome,” Jazayeri says.
The researchers were able to study the effect of confidence by adjusting the difficulty of the task. In some trials, the interval between the two flashes was much shorter or longer than 850 milliseconds. These trials were relatively easy and afforded a high degree of confidence. In other trials, the animals were less confident in their judgments because the interval was closer to the boundary and difficult to discriminate.
As they had hypothesized, the researchers found that the animals’ behavior was influenced by their confidence in their performance. When the interval was easy to judge, the animals were much quicker to switch to the other rule when they found out they were wrong. When the interval was harder to judge, the animals were less confident in their performance and applied the same rule a few more times before switching.
“They know that they’re not confident, and they know that if they’re not confident, it’s not necessarily the case that the rule has changed. They know they might have made a mistake [in their interval judgment],” Jazayeri says.
By recording neural activity in the frontal cortex just after each trial was finished, the researchers were able to identify two regions that are key to hierarchical decision-making. They found that both of these regions, known as the anterior cingulate cortex (ACC) and dorsomedial frontal cortex (DMFC), became active after the animals were informed about an incorrect response. When the researchers analyzed the neural activity in relation to the animals’ behavior, it became clear that neurons in both areas signaled the animals’ belief about a possible rule switch. Notably, the activity related to animals’ belief was “louder” when animals made a mistake after an easy trial, and after consecutive mistakes.
The researchers also found that while these areas showed similar patterns of activity, it was activity in the ACC in particular that predicted when the animal would switch rules, suggesting that ACC plays a central role in switching decision strategies. Indeed, the researchers found that direct manipulation of neural activity in ACC was sufficient to interfere with the animals’ rational behavior.
“There exists a distributed circuit in the frontal cortex involving these two areas, and they seem to be hierarchically organized, just like the task would demand,” Jazayeri says.
Daeyeol Lee, a professor of neuroscience, psychology, and psychiatry at Yale School of Medicine, says the study overcomes what has been a major obstacle in studying this kind of decision-making, namely, a lack of animal models to study the dynamics of brain activity at single-neuron resolution.
“Sarafyazd and Jazayeri have developed an elegant decision-making task that required animals to evaluate multiple types of evidence, and identified how the two separate regions in the medial frontal cortex are critically involved in handling different sources of errors in decision making,” says Lee, who was not involved in the research. “This study is a tour de force in both rigor and creativity, and peels off another layer of mystery about the prefrontal cortex.”