MIT Latest News
Blade Kotelly is a senior lecturer at MIT on design thinking, user interfaces, and innovation whose enthusiasm for cars is intertwined with his passion for innovative design. Despite Kotelly’s love affair with the internal combustion engine, he realizes the technology is heading for the endangered species list. “We are going to see a huge shift to electric cars, not just for the environment but because the total operating cost is lower,” he says.
Kotelly’s brain may have convinced him batteries would rule, but his heart did not follow until he saw Tesla’s upcoming Cybertruck. “The Cybertruck will be one of the most significant shifts in cars in 50 years,” enthuses Kotelly. “It’s a radically different design idea — so simply made with folds and an exoskeleton. Tesla is the only car company that is challenging the underlying assumption on every axis.”
In Kotelly’s work with his innovation consultancy, which includes assessments, keynotes, training classes, and hands-on engagement, he encourages design teams to similarly question prevailing assumptions. “Large companies need to make innovation a part of the way they work every day,” says Kotelly. “Innovation not only needs to happen in the product group, but in HR, sales, marketing — everywhere in the organization.”

Building and acquiring innovation
One of the most challenging paradoxes facing businesses today is that success typically requires achieving scale while also demanding continuous innovation. Yet, the larger the organization, the harder it is to innovate.
“Most corporations don’t know how to innovate consistently across the board,” says Kotelly. “Some have some very innovative groups, but innovation is usually not a part of the company’s culture and DNA. Often whatever it was that made a company successful in the beginning doesn’t keep them successful over time.”
Companies have tried to tame the paradox by acquiring innovative startups. Although this is often a smart move, Kotelly suggests that larger companies should also identify how startups achieve innovation and try to imitate it in-house.
“Due to the need to win funding and early customers, startups are forced from the start to figure out who they really are and to understand the core of their product or service,” says Kotelly. “That is part of the key to their success at innovation. Yet when a company acquires the startup, that culture of innovation can evaporate.”
Typically, says Kotelly, the acquiring company “squelches the innovation and its own DNA takes over.” If the parent company applies the right touch, the acquired startup group can continue to innovate, at least for a while. Yet, Kotelly cautions that it is rare that an acquiring company can absorb the startup’s innovative techniques throughout the organization. The exceptions are those companies that can rethink their own processes and goals across the board.
“The world is changing so quickly on so many levels that companies need to learn new techniques and be able to adapt very quickly,” says Kotelly. “We are seeing a lot of shifts around gig economy workers and in consumer expectations of things like delivery of physical goods. There are huge changes in the way media is disseminated and consumed.”
Cultural shifts can also catch companies unprepared. “Design consultants often talk about the intersection of business, technology, and people, but I’d like to add another dimension: culture,” says Kotelly. “Things like speech patterns, styles, and political causes usually shift slowly, but these days the culture is shifting very quickly.”

Stakeholder vs. user-centered design
Innovative design starts with fully understanding the problem the product aims to solve and for whom. Design teams should “spend enough time at the very beginning to really sink into the problem space,” says Kotelly. “The organization needs to leverage the core things it does well and seek input from the outside. Back in the '90s we focused on user-centered design, which was great because people had not been thinking enough about users. Yet companies still produced some terrible products or else targeted the products at the wrong people. Even if a product works technically, your customers need to like it.”
Kotelly suggests broadening the concept of user-centered design to encompass stakeholder-centered design. “The stakeholder could be the person using it or the person buying it or even a competitor, each of whom might respond to the product differently,” he says. “Companies need to consider a much richer and more complex network.”

Software/hardware integration
Technology companies are increasingly producing both hardware and software, yet it is often a challenge getting the two sides to mesh. “Most companies, and especially smaller companies, that work on software and hardware in-house don’t do one of them well,” says Kotelly. “If you have been excellent at hardware for a long time, you may not be good at software. It is super critical to get people with software and hardware skills to work together well.”
One major difference between software and hardware development is that software is produced much more quickly, says Kotelly. There are also differences in style. “It is particularly important with software development that people communicate with each other effectively, which comes down to good leadership skills. The leaders need to understand how to get the energy to the team and help them identify and solve the paradox.”

Voice assistants
After working on the MIT-inspired Jibo social robot and later leading the Advanced Concept Lab at Sonos, Kotelly has extensive experience with voice interfaces. Although voice assistants have boomed, there have also been failures, which Kotelly says usually stem from poorly managed expectations.
“Voice assistant companies often fail to set boundaries about what the assistant can or cannot do well,” says Kotelly. “These products typically run into trouble when they do not state the product’s limitations up front.”
In a typical scenario, a customer attempts a voice query that fails. The company later adds support for the query but fails to adequately inform the customer, who is then unlikely to try it again. “The end result is all that development effort has been wasted,” says Kotelly.
The problem is made more challenging because there is no visual way of seeing all the options on voice products. “It’s hard to form a mental model of the system,” says Kotelly.
Despite improvements to universal voice assistants such as Alexa and Google Assistant, they can still be laughably clueless. AI is still far from the stage in which it can reliably answer every question.
“I believe we will return to more specialized voice assistants,” says Kotelly. “An expert speech system that knows a domain very deeply will let users better anticipate what it might know. For example, a music expert could differentiate between composers and performers to understand that a search for the composer Mozart should return results of music composed by Mozart and recorded by, say, Glenn Gould. This is different from a search for the artist Duran Duran, in which case you would expect the recording to be done by the artist. Voice agents can be more helpful by being more specialized.”

Social media shifts
Although short-form social media platforms such as Twitter have come to dominate our lives, Kotelly sees problems on the horizon. “People love consuming short-format media, but at some point it’s kind of like candy. People are literally always on and don’t know how to turn things off. This is already leading to mental health problems. People consume too much media because it stimulates their brain. They don’t realize they could be happier if they stopped.”
Technology companies have a responsibility to suggest to over-consuming users that they give it a rest from time to time, says Kotelly. Yet, he adds that the industry also needs to develop longer-form social media formats that are not so ephemeral.
“One problem for short-form social media is that it is not always easy to click through to the primary source material,” says Kotelly. “We have interpretations and derivatives on derivatives, and we’re not getting to the important material itself. We need a better way of surfacing the primary source in a way that is true to the material and helps people understand it. I think we’ll see media produced in multiple different ways that help people with different backgrounds to absorb it.”
People frequently try to participate in political processes, from organizing to hold government accountable for providing quality health care and education, to participating in elections. But sometimes these systems are set up in a way that makes it difficult for people and government to engage effectively with each other. How can technology help?
In a new how-to guide, Luke Jordan, an MIT Governance Lab (MIT GOV/LAB) practitioner-in-residence, advises on how — and more importantly, when — to put together a team to build such a piece of “civic technology.”
Jordan is the founder and executive director of Grassroot, a civic technology platform for community organizing in South Africa. “With Grassroot, I learned a lot about building technology on a very limited budget in difficult contexts for complex problems,” says Jordan. “The guide codifies some of what I learned.”
While the guide is aimed at people interested in designing technology that has a social impact, some parts might also be useful more broadly to anyone designing technology in a small team.
The “don’t build it” principle
The guide’s first lesson is its title: “Don’t Build It.” Because an app can be designed cheaply and easily, many get built when the designer hasn’t found a good solution to the problem they're trying to solve or doesn’t even understand the problem in the first place.
Koketso Moeti, founding executive director of amandla.mobi, says she is regularly approached by people with an idea for a piece of civic technology. “Often after a discussion, it is either realized that there is something that already exists that can do what is desired, or that the problem was misdiagnosed and is sometimes not even a technical problem,” she says. The “don’t build it” principle serves as a reminder that you have to work hard to convince yourself that your project is worth starting.
The guide offers several litmus tests for whether or not an idea is a good one, one of which is that the technology should help people do something they’re already trying to do but are finding difficult. “Unless you’re the Wright brothers,” says Jordan, “you have to know if people are actually going to want to use this.”
This means developing a deep understanding of the context you’re trying to solve a problem in. Jordan’s original conception of Grassroot was an alert for when services weren’t working. But after walking around and talking to people in communities that might use the product, his team found that people were already alerting each other. “But when we asked, ‘how do people come together when you need to do something about it,’” says Jordan, “we were told over and over, ‘that’s actually really difficult.’” And so Grassroot became a platform activists could use to organize gatherings.
Building a team: hire young engineers
One section of the guide advises on how to put together a team to build a project, such as what qualities one should want in a chief technology officer (CTO) who will help run things; where to look for engineers; and how a tech team should work with one's field staff.
The guide suggests hiring entry-level engineers as a way to get some talented people on board while operating on a limited budget. “When I’ve hired, I’ve tended to find most of the value among very unconventional and raw junior hires,” says Jordan. “I think if you put in the work in the hiring process, you get fantastic people at junior levels.”
“Civic tech is one exciting area where promising young engineers, like MIT students, can apply computer science skills for the public good,” says Professor Lily L. Tsai, MIT GOV/LAB’s director and founder. “The guide provides advice on how you can find, hire, and mentor new talent.”
Jordan says the challenge is that while people in computer science find these “tech for good” projects appealing, they often don’t pay nearly as well as other opportunities. Like in other startup contexts, though, young engineers have the opportunity to learn a lot in an engaging environment. “I tell people, ‘come and do this for a year-and-a-half, two years,’” he says. “‘You’ll get paid perhaps significantly below industry rate, but you’ll get to do a really interesting thing, and you’ll work in a small team directly with the CTO. You’ll get a lot more experience a lot more quickly.’”
How to work: learn early, quickly, and often
Jordan says that both a firm and its engineers must have “a real thirst to learn.” This includes being able to identify when things aren’t working and using that knowledge to make something better. The guide emphasizes the importance of ignoring “vanity metrics,” like the total number of users. They might look flashy and impress donors, but they don’t actually describe whether or not people are using the app, or if it’s helping people engage with their governments. Total user numbers “will always go up except in a complete catastrophe,” Jordan writes in the guide.
The biggest challenge is convincing partners and donors to also be willing to accept mistakes and ignore vanity metrics. Tsai thinks that getting governments to buy into civic tech projects can help create an innovation culture that values failure and rapid learning, and thus leads to more productive work. “Many times, civic tech projects start and end with citizens as users, and leave out the government side,” she says. “Designing with government as an end user is critical to the success of any civic tech project.”
Strategic use of data is vital for progress in science, commerce, and even politics, but at the same time, citizens are demanding more responsible, respectful use of personal data. Internet users have never felt more helpless about how their data are being used: Surveys show that the vast majority of U.S. adults feel that they have little to no control over the data that the government and private companies collect about them. In response to these concerns, new privacy laws are being enacted in Europe, California, Virginia, and elsewhere around the world.
To conduct more-focused research and analysis of these issues, last week MIT launched a new initiative to bring state-of-the-art computer science research together with public policy expertise and engagement.
Launched on April 6, the MIT Future of Data, Trust, and Privacy initiative (FOD) will involve collaboration between experts specializing in five distinct technical areas:
- database systems
- applied cryptography
- AI and machine learning
- data portability and new information architectures
- human-computer interaction
In addition to technical research, FOD will provide forums for dialogue amongst MIT researchers, policymakers, and industry consortium members, with a structure similar to MIT’s 2019 AI Policy Congress, which included members of the Organization for Economic Cooperation and Development.
FOD is a collaboration between MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Internet Policy Research Initiative (IPRI). Co-director Daniel Weitzner is both a researcher at CSAIL and founding director of IPRI, and previously served as the White House deputy CTO under President Obama.
Weitzner says that one of the larger goals is to reduce the cycle time between the development of new policies and new software systems. He also hopes to work with industry to develop new privacy-preserving tools and to help steer conversations focused on “shaping the future of data governance.”
Founding member companies include American Family Insurance, Capital One, and MassMutual. Initiative Co-director Srini Devadas, a professor at MIT, says that the effort will draw on expertise across MIT in the fields of cryptography, machine learning, systems security, and public policy.
“The goal is to solve challenging problems of collaborative data analytics and machine learning where sharing data provides significant benefit to all participants, while also preserving strong privacy protections,” says Devadas.
At the launch event, CSAIL Director Daniela Rus cited MIT’s long history of work in the privacy space, from foundational work on cryptography, to IPRI and the Trust:Data Consortium, which has created tools and architectures that foster the development of a secure internet-based network of trusted data.
Member companies stressed the benefits they see in being part of this initiative as not only helping navigate a changing policy landscape but also developing technical tools to help manage the new policies, laws, and regulations more efficiently. Speaking at the launch were MassMutual’s Head of Data Adam Fox, Capital One’s Machine Learning Research Director Bayan Bruss, and American Family Insurance’s Enterprise Chief Data Officer Brad Burke.
Companies interested in participating in the new initiative can visit the CSAIL site for more information.
Since its invention several millennia ago, concrete has become instrumental to the advancement of civilization, finding use in countless construction applications — from bridges to buildings. And yet, despite centuries of innovation, its function has remained primarily structural.
A multiyear effort by MIT Concrete Sustainability Hub (CSHub) researchers, in collaboration with the French National Center for Scientific Research (CNRS), has aimed to change that. Their collaboration promises to make concrete more sustainable by adding novel functionalities — namely, electron conductivity. Electron conductivity would permit the use of concrete for a variety of new applications, ranging from self-heating to energy storage.
Their approach relies on the controlled introduction of highly conductive nanocarbon materials into the cement mixture. In a paper in Physical Review Materials, they validate this approach while presenting the parameters that dictate the conductivity of the material.
Nancy Soliman, the paper’s lead author and a postdoc at the MIT CSHub, believes that this research has the potential to add an entirely new dimension to what is already a popular construction material.
“This is a first-order model of the conductive cement,” she explains. “And it will bring [the knowledge] needed to encourage the scale-up of these kinds of [multifunctional] materials.”
From the nanoscale to the state-of-the-art
Over the past several decades, nanocarbon materials have proliferated due to their unique combination of properties, chief among them conductivity. Scientists and engineers have previously proposed the development of materials that can impart conductivity to cement and concrete if incorporated within.
For this new work, Soliman wanted to ensure the nanocarbon material they selected was affordable enough to be produced at scale. She and her colleagues settled on nanocarbon black — a cheap carbon material with excellent conductivity. They found that their predictions of conductivity were borne out.
“Concrete is naturally an insulative material,” says Soliman, “But when we add nanocarbon black particles, it moves from being an insulator to a conductive material.”
By incorporating nanocarbon black at just a 4 percent volume of their mixtures, Soliman and her colleagues found that they could reach the percolation threshold, the point at which their samples could carry a current.
They noticed that this current also had an interesting upshot: It could generate heat. This is due to what’s known as the Joule effect.
“Joule heating (or resistive heating) is caused by interactions between the moving electrons and atoms in the conductor,” explains Nicolas Chanut, a co-author on the paper and a postdoc at MIT CSHub. “The accelerated electrons in the electric field exchange kinetic energy each time they collide with an atom, inducing vibration of the atoms in the lattice, which manifests as heat and a rise of temperature in the material.”
In their experiments, they found that even a small voltage — as low as 5 volts — could increase the surface temperatures of their samples (approximately 5 cm³ in size) up to 41 degrees Celsius (about 106 degrees Fahrenheit). While a standard water heater might reach comparable temperatures, it’s important to consider how this material would be implemented when compared to conventional heating strategies.
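The Joule-effect numbers above can be sanity-checked with a back-of-the-envelope sketch. The sample resistance, mass, and specific heat below are illustrative assumptions (the article gives only the voltage and sample size), not the study's measured values:

```python
# Back-of-the-envelope Joule heating estimate. All input values are
# illustrative assumptions, not the paper's measured figures.

def joule_power(voltage_v: float, resistance_ohm: float) -> float:
    """Steady-state Joule (resistive) heating power, P = V^2 / R."""
    return voltage_v ** 2 / resistance_ohm

def temperature_rise(power_w: float, time_s: float,
                     mass_kg: float, specific_heat_j_per_kg_k: float) -> float:
    """Adiabatic (lossless) temperature rise, dT = P * t / (m * c)."""
    return power_w * time_s / (mass_kg * specific_heat_j_per_kg_k)

# Assumed: a ~5 cm^3 sample (~0.011 kg at ~2,200 kg/m^3), a cement
# specific heat of ~880 J/(kg*K), and a hypothetical 100-ohm resistance.
p = joule_power(5.0, 100.0)                     # 0.25 W at 5 V
dt = temperature_rise(p, 600.0, 0.011, 880.0)   # after 10 minutes, no losses
print(f"{p:.2f} W -> {dt:.1f} K rise in 10 min (lossless)")
```

Real samples would lose heat to the surroundings, so the steady surface temperature depends on the balance between Joule power in and heat flow out, not on this lossless estimate alone.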
“This technology could be ideal for radiant indoor floor heating,” explains Chanut. “Usually, indoor radiant heating is done by circulating heated water in pipes that run below the floor. But this system can be challenging to construct and maintain. When the cement itself becomes a heating element, however, the heating system becomes simpler to install and more reliable. Additionally, the cement offers more homogenous heat distribution due to the very good dispersion of the nanoparticles in the material.”
Nanocarbon cement could have various applications outdoors, as well. Chanut and Soliman believe that if implemented in concrete pavements, nanocarbon cement could mitigate durability, sustainability, and safety concerns. Many of those concerns stem from the use of salt for de-icing.
“In North America, we see lots of snow. To remove this snow from our roads requires the use of de-icing salts, which can damage the concrete, and contaminate groundwater,” notes Soliman. The heavy-duty trucks used to salt roads are also both heavy emitters and expensive to run.
By enabling radiant heating in pavements, nanocarbon cement could be used to de-ice pavements without road salt, potentially saving millions of dollars in repair and operations costs while remedying safety and environmental concerns. In certain applications where maintaining exceptional pavement conditions is paramount — such as airport runways — this technology could prove particularly advantageous.
While this state-of-the-art cement offers elegant solutions to an array of problems, achieving multifunctionality posed a variety of technical challenges. For instance, without a way to align the nanoparticles into a functioning circuit — known as the volumetric wiring — within the cement, their conductivity would be impossible to exploit. To ensure an ideal volumetric wiring, researchers investigated a property known as tortuosity.
“Tortuosity is a concept we introduced by analogy from the field of diffusion,” explains Franz-Josef Ulm, a leader of the research and co-author on the paper, a professor in the MIT Department of Civil and Environmental Engineering, and the faculty advisor at CSHub. “In the past, it has described how ions flow. In this work, we use it to describe the flow of electrons through the volumetric wire.”
Ulm explains tortuosity with the example of a car traveling between two points in a city. While the distance between those two points as the crow flies might be two miles, the actual distance driven could be greater due to the circuity of the streets.
The same is true for the electrons traveling through cement. The path they must take within the sample is always longer than the length of the sample itself. The degree to which that path is longer is the tortuosity.
Achieving the optimal tortuosity means balancing the quantity and dispersion of carbon. If the carbon is too heavily dispersed, the volumetric wiring will become sparse, leading to high tortuosity. Similarly, without enough carbon in the sample, the tortuosity will be too great to form a direct, efficient wiring with high conductivity.
Even adding large amounts of carbon could prove counterproductive. At a certain point conductivity will cease to improve and, in theory, would only increase costs if implemented at scale. As a result of these intricacies, they sought to optimize their mixes.
“We found that by fine-tuning the volume of carbon we can reach a tortuosity value of 2,” says Ulm. “This means the path the electrons take is only twice the length of the sample.”
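The definition Ulm describes reduces to a one-line ratio; a minimal sketch, with made-up path and sample lengths:

```python
# Tortuosity as described above: the ratio of the actual conduction-path
# length to the straight-line sample length. The lengths below are
# illustrative, not measurements from the study.

def tortuosity(path_length: float, sample_length: float) -> float:
    """Tortuosity >= 1; a value of 1 would be a perfectly straight path."""
    if sample_length <= 0:
        raise ValueError("sample length must be positive")
    return path_length / sample_length

# A tortuosity of 2, as in the optimized mixes: electrons travel
# twice the length of the sample itself.
print(tortuosity(10.0, 5.0))  # -> 2.0
```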
Quantifying such properties was vital to Ulm and his colleagues. The goal of their recent paper was not just to prove that multifunctional cement was possible, but that it was also viable for mass production.
“The key point is that in order for an engineer to pick up things, they need a quantitative model,” explains Ulm. “Before you mix materials together, you want to be able to expect certain repeatable properties. That’s exactly what this paper outlines; it separates what is due to boundary conditions — [extraneous] environmental conditions — from really what is due to the fundamental mechanisms within the material.”
By isolating and quantifying these mechanisms, Soliman, Chanut, and Ulm hope to provide engineers with exactly what they need to implement multifunctional cement on a broader scale. The path they’ve charted is a promising one — and, thanks to their work, shouldn’t prove too tortuous.
The research was supported through the Concrete Sustainability Hub by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.
When cancer cells metastasize, they often travel in the bloodstream to a remote tissue or organ, where they then escape by squeezing through the blood vessel wall and entering the site of metastasis. A study from MIT now shows that tumor cells become much softer as they undergo this process.
The findings suggest that drugs that prevent cells from softening could potentially slow or halt metastasis. Metastatic tumors are estimated to be present in about 90 percent of patients who die of cancer.
“We have long thought that if we could identify the barriers that a cancer cell has to overcome to form a metastatic tumor, that new drugs could be found and lives could be saved,” says Roger Kamm, the Cecil and Ida Green Distinguished Professor of Biological and Mechanical Engineering and an author of the study.
MIT graduate student Anya Roberts is the lead author of the paper, which appears today in the Journal of Biomechanics. Giuliano Scarcelli, an associate professor of bioengineering at the University of Maryland, is the senior author. Other MIT authors include Peter So, a professor of mechanical engineering and biological engineering, and Vijay Raj Singh, a research scientist in the Department of Mechanical Engineering.
After tumor cells enter the blood circulation, they get transported to another location in the body where they can then undergo a process called transendothelial migration. This occurs when cells squeeze between two neighboring endothelial cells (the cells that make up blood vessels), enter the tissue, and begin to multiply. In 2013, Kamm’s lab was first able to explore this process using a microscopic model of the blood capillaries that allowed them to image cancer cells making their way through a blood vessel wall into the surrounding extracellular matrix.
That study and the new paper are both part of an ongoing effort at MIT and elsewhere to study the physical changes that occur in cancer cells when they metastasize. In the new work, the MIT and University of Maryland researchers set out to test their hypothesis that cells become softer during transendothelial migration, making it easier for them to squeeze through small gaps between the endothelial cells.
To explore that possibility, the researchers created a 3D tissue model of the lining of a blood vessel. The model contains a layer of endothelial cells on top of a collagen gel layer that simulates the extracellular matrix. The researchers placed three different types of aggressive, metastatic tumor cells — lung cancer cells, breast cancer cells, and melanoma cells — on the endothelial layer, and measured the cells’ mechanical properties as they passed through the lining.
Many of the existing techniques for measuring cell stiffness, including atomic force microscopy, require physical contact with the cells, which can alter the cells’ mechanical properties. To avoid that kind of interference, the researchers decided to use two optical techniques, which don’t require any contact with the cells being studied and also enable measurements of the nucleus, the stiffest part of the cell interior.
The first of these optical techniques, known as Brillouin confocal microscopy, can reveal how the mechanical properties of a cell change over time in a 3D environment. This technique measures how light scatters when it interacts with density fluctuations within a material, which correlate with the stiffness of the material.
The second technique, known as confocal reflectance quantitative phase microscopy, measures thermal fluctuations of the cell membrane and the nuclear membrane. Softer membranes have larger fluctuations, while stiffer membranes have smaller fluctuations.
Using these two techniques, the researchers found that all of the cancer cell types that they studied became significantly softer as they passed through the wall of the simulated blood vessel. Overall, the lung, skin, and breast cancer cells softened by 30, 20, and 20 percent, respectively. The nuclei of these cells softened by 32, 21, and 25 percent, respectively.
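The percentages above are straightforward relative drops in stiffness. A minimal sketch of that computation, using hypothetical before/after stiffness values (only the quoted percentages come from the study):

```python
# Percent softening: relative drop in stiffness between paired
# measurements taken before and after transendothelial migration.
# Input values here are hypothetical, in arbitrary units.

def percent_softening(stiffness_before: float, stiffness_after: float) -> float:
    if stiffness_before <= 0:
        raise ValueError("stiffness must be positive")
    return 100.0 * (stiffness_before - stiffness_after) / stiffness_before

# e.g., a cell whose stiffness drops from 1.0 to 0.7 (arbitrary units)
print(f"{percent_softening(1.0, 0.7):.0f}% softer")  # -> 30% softer
```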
This softening began two to three hours after the cells began their migration (a process also known as extravasation), and the cells were still soft when measured 24 hours later. The researchers suspect that these softened cells may also differ biologically from the cells in the original tumor in ways that make them resistant to conventional chemotherapies.
“This softening may enable these tumor cells to survive, migrate farther into a new tissue location, and build up a secondary metastasis site,” Roberts says.
It remains unknown what causes the cells to become softer, but the researchers suspect that it may be caused by changes to the structure of chromatin, consisting of DNA and proteins, which is located in the nucleus.
The researchers hope that their work could lead to the development of new drugs that can interfere with cell softening and thus disrupt metastasis.
“There is a chance that some chemotherapeutics may change the nuclear mechanical properties of tumor cells, and soften them,” Roberts says. “It’s a challenging problem, but it’s worth working on. If one could selectively stiffen tumor cells, that might inhibit the formation of a metastasis.”
The research was funded by the National Cancer Institute and the National Science Foundation.
Kiara Wahnschafft started her first company at age 16. After a classmate died in a drunk driving accident, Wahnschafft couldn’t stop thinking about ways technology could have saved a life. With two other students, she built a prototype for a car key that works only after the driver passes a breathalyzer test. Wahnschafft went on to create a company called SafeStart Technologies, ultimately patenting the product and winning several competitions.
The experience was Wahnschafft’s introduction to a unique way in which she could improve the lives of those around her. “I was always looking for an artistic outlet as a kid,” she says. “When I discovered programming, it was like I finally had this blank canvas on which to freely create potentially meaningful solutions.”
Wahnschafft arrived at MIT with the desire to continue pursuing product engineering for social entrepreneurship. She experimented with mechanical engineering classes through MIT D-Lab, a program focused on equitable design and development, and soon found herself surrounded by startups working to alleviate poverty and improve living standards around the world. One company, called Sanergy, stood out to her for its innovative approach toward improving sanitation in urban settlements. Through a PKG Center fellowship, she traveled to Nairobi, Kenya, and interned at Sanergy during Independent Activities Period (IAP) in January 2020.
On her first day, Wahnschafft went with co-workers to the settlements where the company’s sanitation units were being built. Seeing the systems and meeting those operating them in person, as well as speaking with new co-workers and friends who had grown up in Nairobi, gave her a much deeper understanding of the challenge. While her engineering work focused on improving sanitation conditions, she learned more about the systemic reasons why the settlements had grown so expansive in the first place.
One such issue was job instability. Upon returning to MIT, Wahnschafft dove into an economics research opportunity focused on evaluating a program that teaches Kenyan workers skills needed for digital work. The findings revealed that the program helped to improve wages, employment, and life satisfaction. Wahnschafft then shared her findings with the program’s managers, providing them quantitative reasons to expand their work. The experience introduced her to an evidence-based method for tackling societal challenges.
Today, Wahnschafft is a junior studying both mechanical engineering and economics. In her career, she aims to help solve what she deems the greatest global challenge of our time: the climate crisis. In learning about and working on the energy transition, Wahnschafft often finds herself leveraging her two disciplines together. For example, she notes, “if we’re proposing the installation of heat pumps, it’s helpful to understand both the technical justification for their energy efficiency and the economic policies required for their widescale adoption.”
As a researcher in the MIT Environmental Solutions Initiative Rapid Response Group and the MIT Sloan Climate Pathways Project, Wahnschafft has written multiple briefs to inform Massachusetts and federal policymakers, often utilizing MIT climate research to do so. Both now and in the future, her goal is to ensure climate policy is backed by scientific evidence.
Wahnschafft has also collaborated with the student body and leaders in the administration to improve MIT. As the chief of staff of the Undergraduate Association (UA), the undergraduate student government, she has focused on pulling the student voice into Institute decisions in this unique year, particularly in the area of climate change. She worked with a large group of students, interviewing faculty and other stakeholders in the process, to develop recommendations for climate action at MIT, and is now working with the Institute’s administration to incorporate some of these ideas into MIT’s Plan for Action on Climate Change.
At a forum about MIT’s Climate Action Plan, Wahnschafft spoke on a panel focused on MIT’s role in the energy transition, and proposed ideas on ways to coordinate the wealth of climate research on campus. After working for a few different MIT climate-focused research centers, she has seen how “MIT has all this amazing research, but it’s often in silos.” After conversations with many faculty and students, she believes that MIT can “exponentially increase its impact” by connecting researchers with each other and with opportunities to influence climate policy.
Effective communication is also the theme of Wahnschafft’s favorite class, 11.011 (The Art and Science of Negotiation), for which she has served as a teaching assistant. She believes that the course should be an essential part of any MIT student’s curriculum. “I used to think negotiating meant sitting down at the bargaining table to haggle over prices,” she says. “Through the class, you learn that negotiation is so much more: It is practicing empathy and finding common ground. Especially in our polarized country, and especially on issues like climate that are so cross-cutting, we need to open up conversations to reach some mutual understanding.”
Wahnschafft plans on putting her negotiation skills to the test this summer, when she will be interning in Washington through the MIT Washington Summer Internship Program. She hopes to continue working on climate issues that sit at the intersection of evidence and policy. “It’s going to take time to solve the climate crisis,” says Wahnschafft. “But my everyday focus will be thinking about whether the decisions I’m making are always socially and ethically responsible.”
“I think that as MIT students, we need to be very thoughtful with where we choose to dedicate our minds. I know so many of my peers will go on to become incredible leaders in all types of important organizations,” she says. “We so often have such incredible opportunities at our fingertips during and after our time at MIT, and that’s amazing. So, we can and should be intentional with which of these we pursue and in the decisions we make as leaders, always considering the implications for our diverse local and global communities.”
Wahnschafft applies the same principles when looking to the future. “I’ve had the most incredible education and am very often thinking about where I can best apply it to make this world a little better. Applying my education to help combat climate change, one of the greatest global challenges in history, is the way in which I hope to make a difference.”
Ambitious goals are often called moonshots, but the challenge of addressing climate change will be even more monumental. This “Earthshot,” as MIT President L. Rafael Reif calls it in an op-ed published today in The Boston Globe, is an enormously complex problem with no single right answer, no clear finish line, multiple stakeholders with conflicting priorities, and no central authority empowered to solve it.
The "super wicked problem" of bringing the global economy to net-zero carbon and adapting to aspects of climate change we can’t prevent will require sustained contributions from every corner of industry, government, academia, philanthropy, and every individual, Reif writes.
To get there, he argues for pursuing two tracks at once. “On path one, we must go as far as we can, as fast as we can, with the tools we have now. And by tools, I mean not only science and technology, but also policy, infrastructure, behavioral and cultural changes, and more,” he writes.
“But the fact is,” he adds, “current technology alone will not get us to the 2050 target.”
Reif thus proposes a path two, involving the creation and deployment of new tools, including science and technology breakthroughs, to approach the many parts of the climate change problem, including aviation, supply chains, agriculture, environmental justice, jobs, and much more.
To meet this path two challenge, he writes, research universities have a special role: “to spawn ideas that meet the needs of different sectors, and to optimize a system for speeding the mind-to-lab-to-market flow of technological answers, while helping to shape policies and processes to support adoption at scale.”
For instance, MIT has launched the MIT Climate Grand Challenges, which has led to novel proposals, from capturing carbon dioxide by domesticating fast-growing microbes to developing plasma-assisted technologies as enablers of green aviation.
Universities can help “tough tech” ideas like these reach the market by creating specialized accelerators. In the MIT ecosystem, The Engine identifies entrepreneurs with bold new-science answers to deep societal problems and connects them with impact investors.
The Institute is also working to create an innovation marketplace based on collaboration, not competition. For example, the member companies of the new MIT Climate and Sustainability Consortium are working with MIT researchers and with each other to speed the creation, testing, and deployment of practical climate solutions within their production processes, supply chains, and service models.
“With a super wicked problem, nobody has all the answers. But if individuals and institutions in every part of the economy and society tackle the pieces of the problem within their reach and collaborate with each other, we have a real shot — an Earthshot — at preserving a habitable world,” Reif writes.
No community has been immune to the hardships of the last year. But there have been bright spots. In response to the global pandemic, scientists developed effective vaccines. In response to tragic killings, people have begun important conversations about racial injustice and equity. In response to social isolation, we’ve adopted new digital platforms and community-building initiatives.
In many ways, Kendall Square has been a microcosm for those responses. The community has leaned into issues around inclusion, striven to support struggling businesses, and developed programs to overcome the loneliness and anxiety that have followed sudden changes in life and work. Of course, Kendall Square has also been home to some of the vaccine development work critical for ending the global pandemic.
Those efforts gave the 13th annual meeting of the Kendall Square Association (KSA) a decidedly positive tone. This year’s event was titled “The New Kendall Challenge” in a nod to the transformed landscape for work and life we find ourselves in. Even as speakers acknowledged the tragedies of the last 12 months and the tough work ahead, they also saw signs of positive change.
Chief among the optimists was MIT president and keynote speaker L. Rafael Reif.
“I think it is safe to say that, when we all come back to Kendall, things will be different,” Reif said in his address. “But I have no doubt that Kendall will come back better than ever.”
Reif also expressed gratitude that companies with such a large presence in Kendall Square, including Moderna, Pfizer, and Johnson and Johnson, were playing such a central role in vaccine efforts.
“To the general public, the vaccines seemed to come out of nowhere — an overnight success,” Reif said. “But of course, the ‘miracle’ of these vaccines was the fruit of about four decades of fundamental university science and applied industry research.”
Another speaker striking a positive tone was KSA President C.A. Webb. Webb said that while the KSA has been around for 14 years, the past 12 months have made the association rethink its role in the community.
“The KSA community has always been critical, but nothing like a global pandemic to take it from a nice-to-have to a must-have,” Webb said.
In the first few weeks of the pandemic, KSA brought together different segments of its community for discussion, including restaurant owners, human resources leaders, and facility operators. They quickly learned that everyone was grappling with similar problems.
In response to the value members were getting from those meetings, KSA launched the “Future of (how we) Work” task force to help answer questions about how the ecosystem will return to work after Covid-19.
Reif noted KSA’s task force is just one of several initiatives that show the alignment between KSA and MIT. Another is KSA’s Inclusion Drives Innovation program, which complements MIT’s Strategic Action Plan for Diversity, Equity, and Inclusion.
Equity and inclusion were the focus of another conversation at the event, between Bill Sibold, the executive vice president and head of Sanofi Genzyme, and Tanisha Sullivan, the executive advisor to the president of Sanofi Genzyme and the Boston chapter president of the NAACP.
Sullivan first entered the life sciences industry 25 years ago with Johnson and Johnson. In that time, she said, she’s been inspired in her NAACP work by her biotech colleagues, who are tackling some of life’s most complex and persistent challenges.
Sullivan said the conversations around racial injustice in the last year have been hugely important for the country.
“We are making progress,” Sullivan said. “In many respects, the tragedies of the last year heightened our collective awareness about race and racism in our country. In the last year, we haven’t been afraid to ask the question, ‘Why?’ Why are we here? Why is this happening? That has led to a deeper understanding of racial inequality in our country and it’s led to more of a commitment on the part of some to do something meaningful about it.”
Sullivan and Sibold also acknowledged the country is still struggling to decide how to address issues like police accountability, educational equity, and voting access.
“Anything we did in the last year wasn’t good enough, and now we’re in a new trajectory that we all have to move in,” Sibold said.
Other speakers at the virtual event included Cambridge Mayor Sumbul Siddiqui, who thanked the Kendall community for raising money to provide bulk regional testing and Covid-19 vaccine trials, and Broad Institute Chief Communications Officer and KSA board chair Lee McGuire, who described a collaboration involving Cambridge-based groups to provide door-to-door Covid testing at every Cambridge nursing home in one weekend.
Overall, the event’s speakers believe Kendall Square’s community will help drive each of its members’ success moving forward. They also said the community is just getting started.
During a Q&A session with Reif after his keynote, Webb asked what inning he thought Kendall Square was in. Reif said the third.
The answer underscored the fact that there’s a lot of work to do and progress to be made in the area. It also alluded to the continued potential of what has been called the most innovative square mile on the planet.
“I would like Kendall Square to be a place that never forgets the source of its strength: a great system of mutual inspiration, support, and collaboration, stretching from fundamental science all the way to practical impact — a system that embraces absolutely everyone in this room today,” Reif said.
The following letter was sent to the MIT community today by President L. Rafael Reif.
To the members of the MIT community,
On Monday, the trial of Derek Chauvin will move to closing arguments, amid continued anguish across the country over systemic racism and violence against people of color.
I write now to acknowledge the pain, frustration, exhaustion and profound weight of this moment, to recognize the impact across our community, especially for people of color, and to offer a few observations and resources.
To our faculty and to leaders at every level across MIT: You can make an important difference if, at the start of a class or a meeting, you simply acknowledge what is happening in our society and, if possible, create time and space to talk about it. We have already heard from many appreciative students about the acknowledgments and small changes faculty members have made ahead of the upcoming long weekend.
In this time of societal upheaval and heightened anxiety, I hope we can all look out for one another with care and compassion. You may rely on peers and others close to you for comfort. But it is also very natural to seek professional support. As ever, MIT offers extensive resources, both for MIT employees and for students. (After-hours and 24/7 resources will be available for students and employees over the long weekend that begins today.)
To all of you, and especially to our students: In this highly charged moment, I recognize that, depending on how events unfold in the coming days, you may feel moved to gather with others, whether in protest or just for mutual support. We offer these guidelines to help you keep yourself and others safe – and Covid safe.
As I write, CP* (the wonderful, Covid-era online version of Campus Preview Weekend) is already underway. As we give next year’s incoming students a preview of what makes our community special, I hope we can show them how we care for one another at MIT.
L. Rafael Reif
MIT Career Advising and Professional Development (CAPD) has a simple mission statement: engaging undergraduates, graduate students, alumni, and postdocs in self-discovery to craft lives that are intellectually challenging, personally enriching, and of service to the world. With increased career uncertainty due to the pandemic, Deborah Liverman, executive director for CAPD, shares how the office can help students navigate a path to fulfilling work.
Q: How is CAPD supporting students now that recruitment for employment and graduate school is virtual?
A: As the world shifted to virtual operations, we pivoted to support our students in a brand-new digital landscape. Soon after campus closed in March 2020, CAPD mobilized staff to consider the rising needs of students across levels and programs. We anticipated an increased need to support those who had been negatively affected by the pandemic with limited summer opportunities, rescinded jobs, delayed job offers, and layoffs. To meet this need, we created new resources and found additional ways to identify students and graduates who needed career assistance. Simultaneously, we needed to help those students adjust to virtual interviewing.
It quickly became clear that incorporating more technology was a necessity, both for our office programming and for students in recruitment processes. To help students navigate these learning curves, we developed COVID-19 FAQs to address career concerns — including how to select a graduate school without a site visit — and resources to help report GPA for Covid-affected semesters, ace a virtual interview, and succeed in a virtual career fair. In addition, advising appointments went long-distance through video with extended hours to serve students in different time zones.
The office continues to plan virtual workshops, employer events, and career fairs, too, such as the Ivy+ Just in Time Career Fair, a collaboration with the Institute’s peer schools.
Q: With the uncertainty of the labor market, what advice do you have for students?
A: Students have tackled difficulties and disappointments over the last year, compounding the stress inherent in career planning. First and foremost, we encourage students to take care of themselves mentally and physically, and to reach out to our office for any career support they need — we care about their well-being as well as their work.
In this uncertain market, it’s smart to think about virtual and remote career exploration. There are thousands of internship and job listings on Handshake, and students can also consider how to bolster their skill set through classes on Coursera, edX, and LinkedIn Learning; independent projects; volunteering; or exploring potential graduate programs.
Flexibility, the ability to pivot, and resiliency can all be useful when navigating an uncertain labor market; however, there are positive indicators for an improved job market. The National Association of Colleges and Employers’ Job Outlook 2021 Spring Update projects that hiring will rebound and employers will hire 7.2 percent more new college graduates from the Class of 2021 than they hired from the Class of 2020.
In this competitive market, be ready to apply to opportunities quickly. Check that your resume is ready, your LinkedIn profile is up-to-date, and that you’ve brushed up on your interviewing skills. Additionally, networking can help you discover more about career paths and, potentially, learn about hiring processes, including unlisted or upcoming job opportunities. MIT has a strong alumni network, many of whom are easy to connect with through the Alumni Association’s Alumni Advisors Hub. This platform can connect students with alumni for general career chats or more targeted conversations, such as coding or case interview practice.
Whether your plans have unexpectedly changed, you aren’t sure where to start, or you’re in the middle of conducting a job search, you are not alone. CAPD is here to help, and there is no formal advance preparation needed to book an appointment with a team member. For those who are graduating soon and working on their plans, we are eager to stay connected and offer our support, which doesn’t expire when you receive your degree. Our services are available to alumni up to two years after graduation.
Q: What advice do you have for students considering a job in academia or applying to graduate/professional school?

A: In academia, the increase in application rates for graduate and professional schools has made acceptances more competitive. And for graduate students and postdocs seeking faculty positions, there are fewer opportunities due to flat or decreased university budgets. The competitiveness and dearth of opportunities reflect what is going on in the economy and, unfortunately, that means some graduates may not be able to pursue their first career choice this year.
If you’re unable to pursue your ideal plans, we encourage you to meet with one of our advisors so that we can support you in preparing to reapply and in making alternate plans. In addition to individual Zoom and phone appointments, which can be booked through Handshake, we also host virtual events to help students explore options. For example, we have invited guest speaker Lauren Celano to present seminars on April 28, including a General Overview of Career Opportunities for Graduate Students and Postdocs and Career Search Strategies: How to Identify Opportunities and Best Practices for Job Searching.
We know that this may be a disappointing or stressful period for many students, and that it can be difficult to determine next steps. As you move towards reapplication and consider alternate plans, our staff is here for you every step of the way.
MIT students Yu Jing Chen and Max Williamson have been selected as 2021 Truman Scholars. Truman Scholars demonstrate outstanding leadership potential, a commitment to a career in government or the nonprofit sector, and academic excellence.
Chen and Williamson join 62 other scholars who were selected from 845 candidates. MIT President L. Rafael Reif personally informed both students that they were selected for this competitive scholarship, telling them, “This year it [Truman Scholarship] was so competitive and this is so important, and we are so very very proud of you.”
Established by Congress in 1975 as a living memorial to President Harry S. Truman and a national monument to public service, the Truman Scholarship carries the legacy of our 33rd president by supporting and inspiring the next generation of public service leaders. For more than 40 years, the Truman Foundation has fulfilled that mission — inspiring and supporting Americans from diverse backgrounds to public service. Truman Scholars receive up to $30,000 toward graduate studies in the United States, as well as access to development programs such as the Truman Scholars Leadership Week, the Summer Institute, and other Truman Fellows events.
Chen and Williamson were supported by MIT’s Distinguished Fellowships team in Career Advising and Professional Development and MIT’s Truman Selection Committee. Kimberly Benard, assistant dean of Distinguished Fellowships, says, “While MIT has supported many finalists, Yu Jing and Max are MIT’s first winners in nine years. This is a testament to their civic mindedness, continuous service, and strong leadership skills. Both Yu Jing and Max are inspirational leaders who have transformed conversations in their respective communities, and we are proud that they will represent MIT as Truman Scholars.”
Chen, who hails from Chicago, is majoring in urban studies and planning with computer science. She is an active participant in MIT’s first-year Concourse community. Professor Anne McCants, the director of the program, states, “Yu Jing combines her deep curiosity about the world with her unusual capacity to listen attentively and nudge productively, all in the service of building community wherever she goes. In her first year with us in Concourse, we had the pleasure of watching her develop her own intellectual and social voice, at the same time that she helped other students to flourish.”
In the spring of her first year at MIT, Chen founded the MIT Asian American Initiative, which seeks to advocate on behalf of Asian Americans and increase civic engagement. Chen spearheaded efforts to create two murals, one that honors International Migrant Day and another that documents the Asian American experience, and also supported a magazine that is a collection of stories, poems, and art of individual immigration stories. In addition, she became a member of the First Generation and/or Low Income Working Group, composed of students, staff, and faculty looking to improve support for vulnerable students. Her activism on campus inspired her to run as vice president for the Undergraduate Association, a position she now holds. After Covid-19 hit, Chen rose to the occasion, advocating for funding for low-income students and the pass/fail-no record grading policy.
During summer 2019, Chen was accepted to the MIT International Science and Technology Initiatives (MISTI) program to India to work with the Information Retrieval and Extraction Lab at the International Institute of Information Technology Hyderabad. While there, she tested machine learning and natural language processing methods to combat online toxicity. Chen has also conducted research with the MIT Civic Data Design Lab, which applies data to improving the lives of citizens. Chen was tasked with scraping data from The New York Times to uncover the most pertinent concerns of city residents, which was then presented to New York City Council candidates.
This past summer, Chen did a remote internship with the City of Miami Beach Mayor’s Office as a Covid-19 Recovery Intern. She interviewed business owners and homeowner associations to understand the efficacy of their Covid-19 policies and to discover the challenges the city confronted in curbing the spread. She was also able to analyze and create visualizations of Florida’s Covid-19 data to inform city policy recommendations.
Williamson, who hails from Wilmington, Delaware, is an electrical engineering and computer science major and public policy minor. Williamson plans to use the Truman Scholarship to attend law school before embarking upon a career in government service.
Williamson accepted an internship with U.S. Senator Chris Coons from Delaware the summer after his first year, primarily drafting technology policy memos and performing constituent advocacy. At MIT, Williamson founded Engineers for Biden, MIT’s chapter of Students for Biden (and was also a fellow with Massachusetts for Biden), before the operation became fully remote due to Covid-19. Williamson quickly changed course, and established partnerships with regional schools, organized phone banks, and created roundtables with prominent Biden supporters. He even turned his garage in Delaware into a Biden sign-making workshop, creating large plywood signs to display.
Last fall, Williamson decided to take a leave from MIT to dedicate himself full time to Coons’ reelection efforts, and was offered the job of campaign data director. In this role, Williamson led the development of logistical software solutions and electorate analysis. After Coons’ successful reelection, Williamson was recruited to help New Castle, Delaware, launch a high-throughput genomics and Covid-test lab at Delaware State University with $6.5 million from CARES Act grants.
Williamson is a founding member and contributor to MIT’s Political Review, a student-run digital magazine for political thought. Williamson has also been part of MIT’s heavyweight rowing team, and through MIT Global Teaching Labs he designed and taught a comprehensive computer science curriculum in Italy.
Recognizing the importance of education and access, Williamson applied for a Priscilla King Gray Public Service Fellowship to pilot a preparatory computer science curriculum in collaboration with a Wilmington community center. Assistant Dean Alison Hynd from the PKG Center says, “Max has a real passion for his home state of Delaware and is using his MIT education to help the state's residents meet the opportunities and challenges of its new economic engines. With his PKG fellowship, he partnered with a local community center, and Max hopes this curriculum will serve hundreds of students in the coming years and serve as a pipeline between Wilmington’s underserved communities and its burgeoning tech industry. In the past year, he has also done extraordinary work to enable high-volume, low-cost Covid testing for the state. I fully expect to see Max representing Delaware in Congress in the future!"
Whatever business a company may be in, software plays an increasingly vital role, from managing inventory to interfacing with customers. Software developers, as a result, are in greater demand than ever, and that’s driving the push to automate some of the easier tasks that take up their time.
Productivity tools like Eclipse and Visual Studio suggest snippets of code that developers can easily drop into their work as they write. These automated features are powered by sophisticated language models that have learned to read and write computer code after absorbing thousands of examples. But like other deep learning models trained on big datasets without explicit instructions, language models designed for code-processing have baked-in vulnerabilities.
“Unless you’re really careful, a hacker can subtly manipulate inputs to these models to make them predict anything,” says Shashank Srikant, a graduate student in MIT’s Department of Electrical Engineering and Computer Science. “We’re trying to study and prevent that.”
In a new paper, Srikant and the MIT-IBM Watson AI Lab unveil an automated method for finding weaknesses in code-processing models, and retraining them to be more resilient against attacks. It’s part of a broader effort by MIT researcher Una-May O’Reilly and IBM-affiliated researcher Sijia Liu to harness AI to make automated programming tools smarter and more secure. The team will present its results next month at the International Conference on Learning Representations.
A machine capable of programming itself once seemed like science fiction. But an exponential rise in computing power, advances in natural language processing, and a glut of free code on the internet have made it possible to automate at least some aspects of software design.
Trained on GitHub and other program-sharing websites, code-processing models learn to generate programs just as other language models learn to write news stories or poetry. This allows them to act as a smart assistant, predicting what software developers will do next, and offering an assist. They might suggest programs that fit the task at hand, or generate program summaries to document how the software works. Code-processing models can also be trained to find and fix bugs. But despite their potential to boost productivity and improve software quality, they pose security risks that researchers are just starting to uncover.
Srikant and his colleagues have found that code-processing models can be deceived simply by renaming a variable, inserting a bogus print statement, or introducing other cosmetic operations into programs the model tries to process. These subtly altered programs function normally, but dupe the model into processing them incorrectly and rendering the wrong decision.
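Such a semantics-preserving edit is simple to construct. The sketch below (a minimal illustration, not the researchers’ actual attack code) renames a variable in a small Python program and checks that the perturbed version still behaves identically — precisely the kind of cosmetic change that can flip a model’s prediction:

```python
import ast

# A tiny program an assistant model might be asked to summarize or complete.
SOURCE = """
def total(values):
    result = 0
    for v in values:
        result += v
    return result
"""

class RenameVar(ast.NodeTransformer):
    """Rename every occurrence of one identifier (a purely cosmetic edit)."""
    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        if node.id == self.old:
            node.id = self.new
        return node

tree = ast.parse(SOURCE)
RenameVar("result", "zqx").visit(tree)
perturbed = ast.unparse(ast.fix_missing_locations(tree))

# Both versions compute exactly the same thing on any input...
ns_a, ns_b = {}, {}
exec(SOURCE, ns_a)
exec(perturbed, ns_b)
assert ns_a["total"]([1, 2, 3]) == ns_b["total"]([1, 2, 3])
# ...yet a model keyed to surface statistics may treat them very differently.
```

The variable name `zqx` is arbitrary; the point is that nothing about the program’s behavior changes, only its surface text.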
The mistakes can have serious consequences for code-processing models of all types. A malware-detection model might be tricked into mistaking a malicious program for benign. A code-completion model might be duped into offering wrong or malicious suggestions. In both cases, viruses may sneak by the unsuspecting programmer. A similar problem plagues computer vision models: Edit a few key pixels in an input image and the model can confuse pigs for planes, and turtles for rifles, as other MIT research has shown.
Like the best language models, code-processing models have one crucial flaw: They’re experts on the statistical relationships among words and phrases, but only vaguely grasp their true meaning. OpenAI’s GPT-3 language model, for example, can write prose that veers from eloquent to nonsensical, but only a human reader can tell the difference.
Code-processing models are no different. “If they’re really learning intrinsic properties of the program, then it should be hard to fool them,” says Srikant. “But they’re not. They’re currently relatively easy to deceive.”
In the paper, the researchers propose a framework for automatically altering programs to expose weak points in the models processing them. It solves a two-part optimization problem: an algorithm identifies sites in a program where adding or replacing text causes the model to make the biggest errors, and it also identifies what kinds of edits pose the greatest threat.
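The search itself can be pictured as a loop over candidate edit sites and candidate edits, keeping whichever pair hurts the model most. In this sketch the model is replaced by a toy scoring function (a real attack would query the actual code-processing model), and the program and edits are invented for illustration:

```python
# Greedy sketch of the two-part search: enumerate (site, edit) candidates,
# apply each cosmetic edit, and keep the one that maximizes a loss. The
# toy_loss below is a hypothetical stand-in for a real model's error signal.

PROGRAM = ["x = a + b", "print(x)", "return x"]

def rename(lines, old, new):
    # Edit type 1: rename a variable everywhere it appears.
    return [ln.replace(old, new) for ln in lines]

def insert_print(lines, i):
    # Edit type 2: insert a bogus print statement at position i.
    return lines[:i] + ['print("debug")'] + lines[i:]

def toy_loss(lines):
    # Pretend the model gets more confused as identifiers get longer.
    return sum(len(tok) for ln in lines for tok in ln.split()) / len(lines)

candidates = [rename(PROGRAM, "x", name) for name in ("zz", "tmp_variable_name")]
candidates += [insert_print(PROGRAM, i) for i in range(len(PROGRAM) + 1)]

worst = max(candidates, key=toy_loss)  # the edit that degrades the model most
```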
What the framework reveals, the researchers say, is just how brittle some models are. Their text summarization model failed a third of the time when a single edit was made to a program; it failed more than half of the time when five edits were made, they report. On the flip side, they show that the model is able to learn from its mistakes, and in the process potentially gain a deeper understanding of programming.
“Our framework for attacking the model, and retraining it on those particular exploits, could potentially help code-processing models get a better grasp of the program’s intent,” says Liu, co-senior author of the study. “That’s an exciting direction waiting to be explored.”
In the background, a larger question remains: what exactly are these black-box deep-learning models learning? “Do they reason about code the way humans do, and if not, how can we make them?” says O’Reilly. “That's the grand challenge ahead for us.”
Two MIT professors have proposed a new approach to estimating the risks of exposure to Covid-19 under different indoor settings. The guideline they developed suggests a limit for exposure time, based on the number of people, the size of the space, the kinds of activity, whether masks are worn, and the ventilation and filtration rates. Their model offers a detailed, physics-based guideline for policymakers, businesses, schools, and individuals trying to gauge their own risks.
The guideline, appearing this week in the journal PNAS, was developed by Martin Z. Bazant, professor of chemical engineering and applied mathematics, and John W. M. Bush, professor of applied mathematics. They stress that one key feature of their model, which has received less attention in existing public-health policies, is providing a specific limit for the amount of time a person spends in a given setting.
Their analysis is based on the fact that in enclosed spaces, tiny airborne pathogen-bearing droplets emitted by people as they talk, cough, sneeze, sing, or eat will tend to float in the air for long periods and to be well-mixed throughout the space by air currents. There is now overwhelming evidence, they say, that such airborne transmission plays a major role in the spread of Covid-19. Bush says the study was initially motivated early last year by their concern that many decisions about policies were being guided primarily by the “6-foot rule,” which doesn’t adequately address airborne transmission in indoor spaces.
Using a strictly quantitative approach based on the best available data, the model produces an estimate of how long, on average, it would take for one person to become infected with the SARS-CoV-2 virus if an infected person entered the space, based on the key set of variables defining a given indoor situation. Rather than a simple yes or no answer about whether a given setting or activity is safe, it provides a guide as to just how long a person could safely expect to engage in that activity, whether it be a few minutes in a store, an hour in a restaurant, or several hours a day in an office or classroom, for example.
“As scientists, we’ve tried to be very thoughtful and only go with what we see as hard data,” Bazant says. “We’ve really tried to just stick to things we can carefully justify. We think our study is the most rigorous study of this type to date.” While new data are appearing every day, and many uncertainties remain about the SARS-CoV-2 virus’ transmission, he says, “We feel confident that we’ve made conservative choices at every point.”
Bush adds: “It’s a quickly moving field. We submit a paper and the next day a dozen relevant papers come out, so we scramble to incorporate them. It’s been like shooting at a moving target.” For example, while their model was initially based on the transmissibility of the original strain of SARS-CoV-2 from epidemiological data on the best characterized early spreading events, they have since added a transmissibility parameter, which can be adjusted to account for the higher spreading rates of the new emerging variants. This adjustment is based on how any new strain’s transmissibility compares to the original strain; for example, for the U.K. strain, which has been estimated to be 60 percent more transmissible than the original, this parameter would be set at 1.6.
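The article does not give the full formula, but the role of the transmissibility parameter can be sketched as a simple rescaling: a strain that spreads k times as easily cuts the safe cumulative exposure time by roughly a factor of k. The baseline value below is invented for illustration; only the 1.6 factor comes from the article.

```python
def adjusted_exposure_limit(baseline_minutes: float,
                            relative_transmissibility: float) -> float:
    """Rescale a safe-exposure-time estimate for a more transmissible strain.

    relative_transmissibility is 1.0 for the original strain; the article's
    example sets it to 1.6 for a strain 60 percent more transmissible.
    """
    return baseline_minutes / relative_transmissibility

# An invented 96-minute baseline shrinks to 60 minutes under the 1.6 factor:
limit = adjusted_exposure_limit(96.0, 1.6)
```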
One thing that’s clear, they say, is that simple rules, based on distance or capacity limits on certain types of businesses, don’t reflect the full picture of the risk in a given setting. In some cases that risk may be higher than those simple rules convey; in others it may be lower. To help people, whether policymakers or individuals, to make more comprehensive evaluations, the researchers teamed with app developer Kasim Khan to put together an open-access mobile app and website where users can enter specific details about a situation — size of the space, number of people, type of ventilation, type of activity, mask wearing, and the transmissibility factor for the predominant strain in the area at the time — and receive an estimate of how long it would take, under those circumstances, for one new person to catch the virus if an infected person enters the space.
The calculations were based on inferences made from various mass-spreading events, where detailed data were available about numbers of people and their age range, sizes of the enclosed spaces, kinds of activities (singing, eating, exercising, etc.), ventilation systems, mask wearing, the amount of time spent, and the resulting rates of infections. Events they studied included, for example, the Skagit Valley Chorale in Washington state, where 86 percent of the seniors present became infected at a two-hour choir practice.
While their guideline is based on well-mixed air within a given space, the risk would be higher if someone is positioned directly within a focused jet of particles emitted by a sneeze or a shout, for example. But in general the assumption of well-mixed air indoors seems to be consistent with the data from actual spreading events, they say.
“When you look at this guideline for limiting cumulative exposure time, it takes in all of the parameters that you think should be there — the number of people, the time spent in the space, the volume of the space, the air conditioning rate and so on,” Bush says. “All of these things are kind of intuitive, but it’s nice to see them appear in a single equation.”
While the crucial importance of airborne transmission has now become clear from the data, Bazant says, public health organizations initially placed much more emphasis on handwashing and the cleaning of surfaces. Early in the pandemic, there was less appreciation for the importance of ventilation systems and the use of face masks, which can dramatically affect the safe levels of occupancy, he says.
“I’d like to use this work to establish the science of airborne transmission specifically for Covid-19, by just taking into account all factors, the available data, and the distribution of droplets for different kinds of activities,” Bazant says. He hopes the information will help people make informed decisions for their own lives: “If you understand the science, you can do things differently in your own home and your own business and your own school.”
Bush offers an example: “My mother is over 90 and lives in an elder care facility. Our model makes it clear that it’s useful to wear a mask and open a window — this is what you have in your control.” He was alarmed that his mother was planning to attend an exercise class in the facility, thinking it would be OK because people would be 6 feet apart. As the new study shows, because of the number of people and the activity level, that would actually be a highly risky activity, he says.
Already, since they made the app available in October, Bazant says, they have had about half a million users. Their feedback helped the researchers refine the model further, he says. And it has already helped to influence some decisions about reopening of businesses, he adds. For example, the owner of an indoor tennis facility in Washington state that had been shut down due to Covid restrictions says he was allowed to reopen in January, along with certain other low-occupancy sports facilities, after an appeal that drew in large part on this guideline and on information from his participation in Bazant’s online course on the physics of Covid-19 transmission.
Bazant says that in addition to recommending guidelines for specific spaces, the new tools also provide a way to assess the relative merits of different intervention strategies. For example, they found that while improved ventilation systems and face mask use make a big difference, air filtration systems have a relatively smaller effect on disease spread. And their study can provide guidance on just how much ventilation is needed to reach a particular level of safety, he says.
“Bazant and Bush have provided a valuable tool for estimating (among other things) the upper limit on time spent sharing the air space with others,” says Howard Stone, a professor of mechanical and aerospace engineering at Princeton University who was not connected to this work. While such an analysis can only provide a rough estimate, he says the authors “describe this kind of order of magnitude of estimate as a means for helping others judge the situation they might be in and how to minimize their risk. This is particularly helpful since a detailed calculation for every possible space and set of parameters is not possible.”
A key portion of MIT’s campus overlaps with Kendall Square, the bustling area in East Cambridge where students, residents, and tech employees scurry around in between classes, meetings, and meals. Where are they all going? Is there a way to make sense of this daily flurry of foot traffic?
In fact, there is: MIT Associate Professor Andres Sevtsuk has made Kendall Square the basis of a newly published model of pedestrian movement that could help planners and developers better grasp the flow of foot traffic in all cities.
Sevtsuk’s work emphasizes the functionality of a neighborhood’s elements, above and beyond its physical form, making the model one that could be used from Cambridge to Cape Town.
“This model allows us to estimate how many pedestrian journeys are likely to occur,” Sevtsuk says. “It also forecasts trip distribution. That depends directly on what’s available around pedestrians and how many destinations they can access on foot.”
Sevtsuk’s model could help fill a void in urban planning. It is normal for a traffic impact assessment (TIA) report to be required for new developments, estimating automobile traffic that the project is likely to create. But there is no standard equivalent for pedestrian traffic, leaving most officials and planners who make decisions about urban projects with greater uncertainty.
In short, counting pedestrians can help pedestrians count.
“There’s this whole history of treating [automobile] traffic numerically,” says Sevtsuk. “Every road investment is accompanied by a cost-benefit analysis. But those benefits are geared to moving around in cars. The people around the table who have numbers are traffic engineers. Cities have great data on vehicular networks, but we know very little about sidewalks in most cities.”
The paper presenting the model, “Estimating Pedestrian Flows on Street Networks,” appears in the Journal of the American Planning Association. Sevtsuk, the sole author, is the Charles and Ann Spaulding Career Development Associate Professor of Urban Science and Planning at MIT, and director of MIT’s City Form Lab.
Bustling Main Street
The model itself draws upon network analysis, regarding most pedestrian trips as functional journeys between various origins and destinations: homes, offices, subway stops, restaurants, and other amenities. Sevtsuk placed a maximum radius on most trips and allowed for a “detour ratio” of up to 15 percent, meaning that pedestrian journeys in the model can go farther than the shortest path to get from one point to another.
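The detour-ratio constraint can be stated in a few lines; the distances here are invented for illustration:

```python
# A trip route counts as plausible only if it is at most 15 percent longer
# than the shortest path between the same origin and destination.
DETOUR_RATIO = 1.15

def within_detour(route_length: float, shortest_length: float) -> bool:
    return route_length <= DETOUR_RATIO * shortest_length

ok = within_detour(550.0, 500.0)       # 10 percent longer: allowed
too_far = within_detour(600.0, 500.0)  # 20 percent longer: rejected
```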
To acquire data for the model, Sevtsuk obtained property-level data from the city and had observers count pedestrians on 60 street segments in Kendall Square for calibration purposes (before the Covid-19 pandemic) during two times of the day: a midday period from 12 to 2 p.m. and an evening rush-hour period from 4 to 8 p.m.
Among other things, Sevtsuk and his research assistants found that the average Kendall Square street segment in the sample had 872 pedestrians during the midday period, and 1,711 during the evening time slot. Kendall Square’s Main Street — which features a subway station, many MIT buildings, and offices for Google and Amazon, among other firms — averaged a neighborhood-high 11,311 pedestrian trips during the four-hour evening period.
“We have to have real-world data to calibrate the model, and that’s what puts us into a ballpark of accurate estimates on all streets,” Sevtsuk says. “The representative sample only has to occur on some streets. But once it’s calibrated on those 60 streets, the estimates are good to go for a whole lot more streets — hundreds or thousands of streets can be estimated.”
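The calibration step can be sketched as fitting a single scale factor that maps the model's raw scores to observed counts, then applying that factor everywhere else. The least-squares-through-the-origin approach and all numbers below are invented for illustration, not taken from the paper:

```python
# Fit one coefficient k minimizing sum (k*score - count)^2 over the counted
# streets, then use it to convert scores on uncounted streets into flows.

def fit_scale(scores, counts):
    # argmin_k sum (k*s - c)^2  =>  k = sum(s*c) / sum(s*s)
    return sum(s * c for s, c in zip(scores, counts)) / sum(s * s for s in scores)

observed_scores = [1.0, 2.0, 4.0]   # model scores on counted streets
observed_counts = [210, 390, 810]   # pedestrians actually counted there

k = fit_scale(observed_scores, observed_counts)
estimate = k * 3.0                  # flow estimate for an uncounted street
```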
A tool for the drawing board, too
As Sevtsuk emphasizes, the model could be applied to almost any urban setting, and not just locations physically resembling Kendall Square. Given destinations to walk to and decent street conditions to connect them, people will walk around all kinds of neighborhoods.
“The estimates are not just comparatives from some similar places,” Sevtsuk says. “They are directly estimated trips from specific buildings to other specific buildings nearby, depending on their uses.”
Granted, the study did generate a number of particular observations about Kendall Square, where about 40 percent of workers and students appear to be walking to a lunch venue every day.
But more broadly, Sevtsuk emphasizes, the model could also be integrated into urban planning to help shape developments still on the drawing board in multiple ways — to estimate pedestrian flows, to help with zoning decisions, and to make sure retail frontages are in places with significant pedestrian flow, among other things.
“What’s particularly useful is it can be applied not just to existing areas like Kendall, but also newly planned places,” Sevtsuk notes. “Even if we were planning a new area, just by knowing what the built configuration of the future development will be, and what land uses it may contain, we can have an educated forecast about what the pedestrian flows will look like.”
Over the past decade, the CRISPR-Cas9 gene editing system has revolutionized genetic engineering, allowing scientists to make targeted changes to organisms’ DNA. While the system could potentially be useful in treating a variety of diseases, CRISPR-Cas9 editing involves cutting DNA strands, leading to permanent changes to the cell’s genetic material.
Now, in a paper published online in Cell on April 9, researchers describe a new gene editing technology called CRISPRoff that allows researchers to control gene expression with high specificity while leaving the sequence of the DNA unchanged. Designed by Whitehead Institute Member Jonathan Weissman, University of California San Francisco Assistant Professor Luke Gilbert, Weissman lab postdoc James Nuñez, and collaborators, the method is stable enough to be inherited through hundreds of cell divisions, and is also fully reversible.
“The big story here is we now have a simple tool that can silence the vast majority of genes,” says Weissman, who is also a professor of biology at MIT and an investigator with the Howard Hughes Medical Institute. “We can do this for multiple genes at the same time without any DNA damage, with a great deal of homogeneity, and in a way that can be reversed. It's a great tool for controlling gene expression.”
The project was partially funded by a 2017 grant from the Defense Advanced Research Projects Agency to create a reversible gene editor. “Fast forward four years [from the initial grant], and CRISPRoff finally works as envisioned in a science fiction way,” says co-senior author Gilbert. “It's exciting to see it work so well in practice.”
Genetic engineering 2.0
The classic CRISPR-Cas9 system uses a DNA-cutting protein called Cas9 found in bacterial immune systems. The system can be targeted to specific genes in human cells using a single guide RNA; the Cas9 protein creates tiny breaks in the DNA strand at those sites, and then the cell’s existing repair machinery patches up the holes.
Because these methods alter the underlying DNA sequence, they are permanent. Plus, their reliance on “in-house” cellular repair mechanisms means it is hard to limit the outcome to a single desired change. “As beautiful as CRISPR-Cas9 is, it hands off the repair to natural cellular processes, which are complex and multifaceted,” Weissman says. “It's very hard to control the outcomes.”
That’s where the researchers saw an opportunity for a different kind of gene editor — one that didn’t alter the DNA sequences themselves, but changed the way they were read in the cell.
This sort of modification is what scientists call “epigenetic” — genes may be silenced or activated based on chemical changes to the DNA strand. Problems with a cell’s epigenetics are responsible for many human diseases such as Fragile X syndrome and various cancers, and can be passed down through generations.
Epigenetic gene silencing often works through methylation — the addition of chemical tags to certain places in the DNA strand — which causes the DNA to become inaccessible to RNA polymerase, the enzyme that reads the genetic information in the DNA sequence into messenger RNA transcripts, which can ultimately be the blueprints for proteins.
Weissman and collaborators had previously created two other epigenetic editors called CRISPRi and CRISPRa — but both of these came with a caveat. In order for them to work in cells, the cells had to be continually expressing artificial proteins to maintain the changes.
“With this new CRISPRoff technology, you can [express a protein briefly] to write a program that's remembered and carried out indefinitely by the cell,” says Gilbert. “It changes the game so now you're basically writing a change that is passed down through cell divisions — in some ways we can learn to create a version 2.0 of CRISPR-Cas9 that is safer and just as effective, and can do all these other things as well.”
Building the switch
To build an epigenetic editor that could mimic natural DNA methylation, the researchers created a tiny protein machine that, guided by small RNAs, can tack methyl groups onto specific spots on the strand. These methylated genes are then “silenced,” or turned off, hence the name CRISPRoff.
Because the method does not alter the sequence of the DNA strand, the researchers can reverse the silencing effect using enzymes that remove methyl groups, a method they called CRISPRon.
As they tested CRISPRoff in different conditions, the researchers discovered a few interesting features of the new system. For one thing, they could target the method to the vast majority of genes in the human genome — and it worked not just for the genes themselves, but also for other regions of DNA that control gene expression but do not code for proteins. “That was a huge shock even for us, because we thought it was only going to be applicable for a subset of genes,” says first author Nuñez.
Also, surprisingly to the researchers, CRISPRoff was even able to silence genes that did not have large methylated regions called CpG islands, which had previously been thought necessary for any DNA methylation mechanism.
“What was thought before this work was that the 30 percent of genes that do not have a CpG island were not controlled by DNA methylation,” Gilbert says. “But our work clearly shows that you don't require a CpG island to turn genes off by methylation. That, to me, was a major surprise.”
CRISPRoff in research and therapy
To investigate the potential of CRISPRoff for practical applications, the scientists tested the method in induced pluripotent stem cells. These are cells that can turn into countless cell types in the body depending on the cocktail of molecules they are exposed to, and thus are powerful models for studying the development and function of particular cell types.
The researchers chose a gene to silence in the stem cells, and then induced them to turn into nerve cells called neurons. When they looked for the same gene in the neurons, they discovered that it had remained silenced in 90 percent of the cells, revealing that cells retain a memory of epigenetic modifications made by the CRISPRoff system even as they change cell type.
They also selected one gene to use as an example of how CRISPRoff might be applied to therapeutics: the gene that codes for Tau protein, which is implicated in Alzheimer’s disease. After testing the method in neurons, they showed that CRISPRoff could be used to turn Tau expression down, although not entirely off. “What we showed is that this is a viable strategy for silencing Tau and preventing that protein from being expressed,” Weissman says. “The question is, then, how do you deliver this to an adult? And would it really be enough to impact Alzheimer's? Those are big open questions, especially the latter.”
Even if CRISPRoff does not lead to Alzheimer’s therapies, there are many other conditions it could potentially be applied to. And while delivery to specific tissues remains a challenge for gene editing technologies such as CRISPRoff, “we showed that you can deliver it transiently as a DNA or as an RNA, the same technology that's the basis of the Moderna and BioNTech coronavirus vaccine,” Weissman says.
Weissman, Gilbert, and collaborators are enthusiastic about the potential of CRISPRoff for research as well. “Since we now can sort of silence any part of the genome that we want, it's a great tool for exploring the function of the genome,” Weissman says.
Plus, having a reliable system to alter a cell’s epigenetics could help researchers learn the mechanisms by which epigenetic modifications are passed down through cell divisions. “I think our tool really allows us to begin to study the mechanism of heritability, especially epigenetic heritability, which is a huge question in the biomedical sciences,” Nuñez says.
Professor Emeritus Ernest Cravalho, an expert in thermodynamics and pioneer in thermal fluids education, dies at 82
Ernest “Ernie” Cravalho, professor emeritus of mechanical engineering at MIT, passed away on Tuesday, April 13, at the age of 82. Cravalho served as a member of MIT’s mechanical engineering faculty for 44 years. Along with his many research contributions in the fields of thermodynamics, heat transfer, and bioengineering, Cravalho helped shape MIT’s thermodynamics education into what it is today.
Born in San Mateo, California, in 1939, Cravalho earned his bachelor’s degree, master’s degree, and PhD in mechanical engineering at the University of California at Berkeley. After receiving his doctoral degree in 1967, Cravalho made the move to the East Coast to join MIT’s Department of Mechanical Engineering as an assistant professor.
That same year, Cravalho collaborated with the late Professor Joseph Smith to launch a new thermodynamics course at MIT. The revamped course grew out of Cravalho’s feeling that students hadn’t been developing a practical and intuitive understanding of thermodynamics. Later on, Cravalho and Smith would work closely with John Brisson and Gareth McKinley, both professors in mechanical engineering, to develop the core thermal fluids class sequence currently taught to undergraduate students — 2.005 (Thermal-Fluids Engineering I) and 2.006 (Thermal-Fluids Engineering II).
As a researcher, Cravalho was recognized as an authority not only in thermodynamics and heat transfer, but also in the cryopreservation of biomaterials and energy conversion. Throughout his career, he published 200 research articles and five books — including “Engineering Thermodynamics” in 1981 — on these topics. He was seen as a pioneer in cryopreservation in particular.
Cravalho’s research earned him a number of awards and accolades, including membership in the American Society of Mechanical Engineers, the Institute of Medicine, and the National Academy of Sciences. He was also a founding fellow in the American Institute of Biological and Medical Sciences.
Throughout his four decades at MIT, Cravalho served in a number of roles across the Institute. From 1975 to 1977, he was associate dean of the School of Engineering. Immediately afterwards, he joined the Harvard-MIT Division of Health Sciences and Technology as associate director until 1982. He was the Edward Hood Taplin Professor of Medical Engineering and chief of biomedical engineering at Massachusetts General Hospital from 1986 to 1993. He then served as co-director of MIT’s program in biomedical engineering until 1995.
While his roles and responsibilities at MIT continued to grow, Cravalho never lost his dedication to teaching. In 2000, he was named a Margaret MacVicar Faculty Fellow in recognition of his exemplary and sustained contributions to the teaching and education of undergraduates. The following year he received the Everett Moore Baker Memorial Award for Excellence in Undergraduate Teaching. In honor of his father, also named Ernest, Cravalho established the Ernest Cravalho Award for Outstanding Performance in Thermal Fluids Engineering, given to a mechanical engineering student each year.
Fairness is part of the promise of sports analytics. By judging an athlete’s performance through good data — as opposed to reputation, image, or outworn clichés — analytics creates the possibility that people can be judged more consistently on merit than often occurs elsewhere in life.
But that promise of fairness only goes so far in a sports world shaped by the same social forces as everything else: Men’s sports have traditionally commanded more resources than women’s sports, including access to data, and the analytics industry has not employed many women or people of color.
The 15th annual MIT Sloan Sports Analytics Conference (SSAC), held online April 8 and 9, placed these issues in the middle of its 2021 agenda. The industry-leading event, a high-profile yearly gathering hosted by the MIT Sloan School of Management, featured numerous panels and speakers focused on the crossroads of sports and society.
That emphasis follows a year of social change and protest, but it’s also borne out by viewership numbers. For instance, across all sports, viewership on television has been almost uniformly down during the Covid-19 pandemic — but, for the women’s NCAA basketball Final Four earlier this month, the semifinal ratings were up 22 percent compared to 2019, and the title game’s ratings were up 11 percent.
And at a time when sports executives and sponsors have fretted over athlete activism possibly conflicting with fan sensibilities, some conference participants offered that women’s sports are better-positioned to thrive through turbulence. WNBA star Sue Bird, for one, observed that women have long had to engage in battles for equal treatment and fair pay, meaning that being a high-flying female professional athlete has often necessitated having an activist’s outlook.
“I think our fanbase already knew what we were about,” said Bird, referring to the long-time embrace of social issues by many of the sport’s stars. She added: “It pays, metaphorically and literally, to be authentic.”
Whatever gains have been made, equity issues remain ever-present in sports, as evidenced by a controversy a few weeks ago over a strikingly substandard weight room provided for the women’s teams in the NCAA basketball tournament — itself a topic of conversation at SSAC.
“I don’t think women coming from the college basketball world were surprised by that,” said Sonia Raman, the former long-time women’s basketball coach at MIT. “At the NCAA level, the student-athlete experience, there needs to be parity in that experience.”
But equity in sports does look a bit different compared to even a couple years ago. Last fall, Raman accepted an assistant coach job with the NBA’s Memphis Grizzlies, in good part because of her reputation for intense preparation and openness to analytics, something she would share with her players at MIT.
“Analytics never gives you a cut and dried answer,” said Raman. “It might make you lean one way or another.” At MIT, she added, the coaching staff’s attitude toward metrics was, “Let’s have a conversation about it. We’d get to that point with our players where there was such a high level of trust, we could include them in the decisions, too.”
Players today increasingly say they are receptive to analytics — and not just marginal athletes looking for an edge to make a roster, but major stars.
“Hockey is so dynamic, I think there are endless opportunities [to find] things to measure,” said Hilary Knight, superstar of U.S. women’s hockey — and part of an all-female panel on hockey analytics at SSAC, something the sport’s old hands might have found mind-bending a few years ago.
J.J. Watt, the star defensive end of the NFL’s Arizona Cardinals, suggested that players will buy into analytics-based decisions — like aggressively going for it on fourth downs in football — as long as coaches are consistently committed to such tactics.
“If you’re going to believe the analytics and be an analytics team, you have to be an analytics team 100 percent of the time,” said Watt, making his first appearance at SSAC. If a team reverses course midseason and starts punting or kicking field goals more on fourth down, he noted, “Then the players start to say, okay, what are we doing here?”
There are plenty of questions sports analysts are still trying to understand better, of course.
“It’s pretty hard to quantify defense with publicly available data,” said Alexandra Mandrycky, director of hockey strategy and research for the Seattle Kraken, the NHL’s new expansion team.
On the other hand, noted Andrew Friedman, president of baseball operations for the World Series champion Los Angeles Dodgers, baseball managers are making decisions by the numbers much more often than they used to: “Fifteen years ago you saw a lot more bad bets happening a lot more frequently,” he noted.
While demonstrating the evolving trends in analytics, the Sloan conference also offers historical perspective. The SSAC baseball panel this year included pioneering analyst Bill James, whose annual “Baseball Abstract” book, published from 1977 to 1988, brought “sabermetrics,” as he then called systematic baseball analysis, to a mainstream national audience for the first time.
Regarding the analytics boom, James said, a bit modestly, “I’ve always been given more credit” than is merited. He added: “I absolutely never envisioned to any extent whatsoever that sabermetrics might come to have the influence that it has had. That was a great shock to me, and still is every day.”
For a younger generation, though, there is no shock involved in using analytics — and if current trends continue, that should apply to teams of any gender, and at any level of sports.
“Embrace data,” said Knight. “It’s here, and it’s the future.”
Ultralight bosons are hypothetical particles whose mass is predicted to be less than a billionth the mass of an electron. They interact relatively little with their surroundings and have thus far eluded searches to confirm their existence. If they exist, ultralight bosons such as axions would likely be a form of dark matter, the mysterious, invisible stuff that makes up 85 percent of the matter in the universe.
Now, physicists at MIT’s LIGO Laboratory have searched for ultralight bosons using black holes — objects that are mind-bending orders of magnitude more massive than the particles themselves. According to the predictions of quantum theory, a black hole of a certain mass should pull in clouds of ultralight bosons, which in turn should collectively slow down a black hole’s spin. If the particles exist, then all black holes of a particular mass should have relatively low spins.
But the physicists have found that two previously detected black holes are spinning too fast to have been slowed by ultralight bosons. Their large spins rule out the existence of ultralight bosons with masses between 1.3×10⁻¹³ and 2.7×10⁻¹³ electronvolts — around a quintillionth the mass of an electron.
The team’s results, published today in Physical Review Letters, further narrow the search for axions and other ultralight bosons. The study is also the first to use the spins of black holes measured from LIGO and Virgo gravitational-wave data to look for dark matter.
“There are different types of bosons, and we have probed one,” says co-author Salvatore Vitale, assistant professor of physics at MIT. “There may be others, and we can apply this analysis to the growing dataset that LIGO and Virgo will provide over the next few years.”
Vitale’s co-authors are lead author Kwan Yeung (Ken) Ng, a graduate student in MIT’s Kavli Institute for Astrophysics and Space Research, along with researchers at Utrecht University in the Netherlands and the Chinese University of Hong Kong.
A carousel’s energy
Ultralight bosons are being searched for across a huge range of super-light masses, from 1×10⁻³³ electronvolts to 1×10⁻⁶ electronvolts. Scientists have so far used tabletop experiments and astrophysical observations to rule out slivers of this wide space of possible masses. Since the early 2000s, physicists have proposed that black holes could be another means of detecting ultralight bosons, due to an effect known as superradiance.
If ultralight bosons exist, they could interact with a black hole under the right circumstances. Quantum theory posits that at a very small scale, particles cannot be described by classical physics, or even as individual objects. This scale, known as the Compton wavelength, is inversely proportional to the particle mass.
As ultralight bosons are exceptionally light, their wavelength is predicted to be exceptionally large. For a certain mass range of bosons, their wavelength can be comparable to the size of a black hole. When this happens, superradiance is expected to quickly develop. Ultralight bosons are then created from the vacuum around a black hole, in quantities large enough that the tiny particles collectively drag on the black hole and slow down its spin.
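The matching condition described above can be sketched with a back-of-envelope calculation. In this hedged sketch, we equate a boson's reduced Compton wavelength, ħ/(m_b c), with a black hole's gravitational radius, GM/c², which gives a characteristic rest energy m_b c² = ħc³/(GM). This is only an order-of-magnitude criterion (superradiance operates over a window around this value, which is why the study probes a range rather than a single mass), but it shows why 10–70-solar-mass black holes are sensitive to bosons in roughly the 10⁻¹³–10⁻¹¹ electronvolt range quoted later in the article:

```python
# Back-of-envelope: boson rest energy whose reduced Compton wavelength
# matches the gravitational radius GM/c^2 of a black hole.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8         # speed of light, m/s
M_SUN = 1.98892e30       # solar mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def boson_energy_matching_black_hole(mass_solar: float) -> float:
    """Rest energy (eV) of a boson with hbar/(m_b c) = G M / c^2,
    i.e. m_b c^2 = hbar c^3 / (G M)."""
    return HBAR * C**3 / (G * mass_solar * M_SUN) / EV

for m in (10, 70):
    print(f"{m} solar masses -> boson rest energy ~ "
          f"{boson_energy_matching_black_hole(m):.1e} eV")
```

Running this gives roughly 1×10⁻¹¹ eV for a 10-solar-mass black hole and 2×10⁻¹² eV for a 70-solar-mass one, squarely inside the mass window the LIGO team searched.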
“If you jump onto and then down from a carousel, you can steal energy from the carousel,” Vitale says. “These bosons do the same thing to a black hole.”
Scientists believe this boson slow-down can occur over several thousand years — relatively quickly on astrophysical timescales.
“If bosons exist, we would expect that old black holes of the appropriate mass don’t have large spins, since the boson clouds would have extracted most of it,” Ng says. “This implies that the discovery of a black hole with large spins can rule out the existence of bosons with certain masses.”
Spin up, spin down
Ng and Vitale applied this reasoning to black hole measurements made by LIGO, the Laser Interferometer Gravitational-wave Observatory, and its companion detector Virgo. The detectors “listen” for gravitational waves, or reverberations from far-off cataclysms, such as merging black holes, known as binaries.
In their study, the team looked through all 45 black hole binaries reported by LIGO and Virgo to date. The masses of these black holes — between 10 and 70 times the mass of the sun — indicate that if they had interacted with ultralight bosons, the particles would have been between 1×10⁻¹³ electronvolts and 2×10⁻¹¹ electronvolts in mass.
For every black hole, the team calculated the spin it should have if it had been spun down by ultralight bosons within the corresponding mass range. From their analysis, two black holes stood out: GW190412 and GW190517. Just as there is a maximum velocity for physical objects — the speed of light — there is a top spin at which black holes can rotate. GW190517 is spinning at close to that maximum. The researchers calculated that if ultralight bosons existed, they would have dragged its spin down by a factor of two.
“If they exist, these things would have sucked up a lot of angular momentum,” Vitale says. “They’re really vampires.”
The researchers also accounted for other possible scenarios for generating the black holes’ large spins, while still allowing for the existence of ultralight bosons. For instance, a black hole could have been spun down by bosons but then subsequently sped up again through interactions with the surrounding accretion disk — a disk of matter from which the black hole could suck up energy and momentum.
“If you do the math, you find it takes too long to spin up a black hole to the level that we see here,” Ng says. “So, we can safely ignore this spin-up effect.”
In other words, it’s unlikely that the black holes’ high spins are due to an alternate scenario in which ultralight bosons also exist. Given the masses and high spins of both black holes, the researchers were able to rule out the existence of ultralight bosons with masses between 1.3×10⁻¹³ electronvolts and 2.7×10⁻¹³ electronvolts.
“We’ve basically excluded some type of bosons in this mass range,” Vitale says. “This work also shows how gravitational-wave detections can contribute to searches for elementary particles.”
This research was supported, in part, by the National Science Foundation.
The long-term goals of the Paris Agreement — keeping global warming well below 2 degrees Celsius and ideally 1.5 C in order to avert the worst impacts of climate change — may not be achievable by greenhouse gas emissions-reduction measures alone. Most scenarios for meeting these targets also require the deployment of negative emissions technologies (NETs) that remove carbon dioxide (CO2) from the atmosphere.
A leading NET candidate is bioenergy with carbon capture and storage (BECCS), which extracts energy from CO2-absorbing plants, captures CO2 that’s released into the atmosphere when the extracted plant matter is combusted, and stores it underground. The end-to-end process entails securing available land, cultivating and transporting crops, converting biomass into electricity with carbon capture, and transporting and storing the captured CO2.
At first glance, it may seem like a no-brainer to ramp up BECCS technology around the world to ensure that the international effort to stabilize the climate will succeed. But the prospect of cultivating plants for BECCS on a massive scale has raised concerns about adverse, unintended consequences. These include environmental impacts ranging from soil erosion to biodiversity loss, and economic impacts, especially higher food prices that could result from redirecting vast tracts of agricultural land to draw down carbon emissions.
A new study in the journal Global Environmental Change focuses squarely on the economic implications of BECCS. Representing all major components of BECCS in the MIT Economic Projection and Policy Analysis (EPPA) model, researchers at the MIT Joint Program on the Science and Policy of Global Change and Imperial College London estimate the likely impacts of the technology on the global economy under climate policy scenarios that keep global warming below 1.5 C and 2 C, respectively.
They find that while it’s economically feasible to implement such policies without relying on BECCS, large-scale deployment of the technology in the second half of the century significantly lowers the overall implementation costs. Moreover, the inclusion of BECCS in these policies prevents widespread economic damages: in the 1.5 C scenario, global consumption decreases by almost 20 percent by 2100 without BECCS, but only by 5 percent with BECCS.
“Our modeling suggests that the benefits of BECCS far outweigh the costs,” says Howard Herzog, senior research engineer at the MIT Energy Initiative and co-author of the study. “In terms of costs, BECCS fares better than direct air capture, the other major negative emissions technology that uses carbon dioxide capture and storage (CCS).”
BECCS also significantly reduces the carbon prices associated with cap-and-trade policies designed to reduce emissions sufficiently to keep global warming below 1.5 C and 2 C. By creating negative emissions, the technology relieves pressure from the emissions cap and therefore lowers the price of emissions permits. At the same time, BECCS is compensated for its negative emissions through the carbon price, which is a substantial source of revenue.
“We conduct a series of experiments which robustly demonstrate that revenue from carbon permits is really driving the deployment of BECCS,” says Jennifer Morris, study co-author and research scientist at the MIT Joint Program and MIT Energy Initiative. “We find that the value of CO2 removal is far greater than the value of the electricity generation. Electricity is essentially a byproduct.”
Finally, the study concludes that while BECCS deployment results in major changes in land use to accommodate bioenergy crop cultivation consistent with meeting the 1.5 C and 2 C climate targets, it drives up the prices of food, livestock, and crops by less than 5 percent on average by 2100 (up to 15 percent in selected regions). Most notably, food prices rise by just 1.5 percent globally.
These results suggest that, in concert with dramatic emissions-reduction measures, BECCS could be an economically effective tool in the global effort to stabilize the climate.
“We have shown that large-scale deployment of BECCS could dramatically lower the costs of implementing policies aimed at meeting the long-term climate goals of the Paris Agreement, and avoid major price increases in agricultural commodities,” says MIT Joint Program deputy director and MITEI Senior Research Scientist Sergey Paltsev, who co-authored the study. “Further research is needed, however, to provide a more granular assessment of food supply chains and BECCS components, and to ensure that such deployment is politically viable.”
What makes urban labor markets more resilient? This is the question at the heart of a new study published in Nature Communications by members of MIT’s Connection Science Group. The researchers in this study — MIT research scientist and Universidad Carlos III (Spain) Professor Esteban Moro, University of Pittsburgh professor and former MIT postdoc Morgan Frank, MIT Professor Alex "Sandy" Pentland, and Max Planck professor and former MIT professor Iyad Rahwan — drew on prior network modeling research to map the job landscapes in cities across the United States, and showed that job “connectedness” is a key determinant of the resilience of local economies.
Economists, policymakers, city planners, and companies have a strong interest in determining what factors contribute to healthy job markets, including what factors can help promote faster recovery after a shock, such as a major recession or the current Covid-19 pandemic. Traditional modeling approaches in this realm have treated workers as narrowly linked to specific jobs. In the real world, however, jobs and sectors are linked. Displaced workers can often transition to another job or sector requiring similar skills. In this way, job markets are much like ecosystems, where organisms are linked in a complex web of relationships.
In ecology and other domains where complex networks are present, resilience has been closely linked to the “connectedness” of the networks. In nature, for example, ecosystems with many mutualistic connections have proven more resistant to shocks, such as changes in acidity or temperature, than those with fewer connections. By drawing on ecosystem-inspired network models and extending the Nobel Prize-winning Pissarides-Mortensen job matching framework, the authors of the new study modeled the relationships between jobs in cities across the United States. Just as connectedness in nature fosters resilience, they predicted that cities with jobs connected by overlapping skills and geography would fare better in the face of economic shock than those without such networks.
To validate this prediction, the researchers examined data from the Bureau of Labor Statistics for all metropolitan areas in the country from the onset to the end of the Great Recession (December 2007-June 2009). They were able to create job landscape maps for each area, including not just the numbers of specific jobs, but also their geographical distribution and the extent to which the skills they required overlapped with other jobs in the area. The size of a given city, as well as its employment diversity, played a role in resilience, with bigger, more diverse cities faring better than smaller and less-diverse ones. However, controlling for size and diversity, factoring in job connectivity significantly improved predictions of peak unemployment rates during the recession. Cities where job connectivity was highest leading up to the crash were significantly more resilient and recovered faster than those with less-connected markets.
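The “connectedness” idea can be illustrated with a toy calculation. The study’s actual model extends the Pissarides-Mortensen matching framework using empirical BLS data; the sketch below is only a minimal illustration, with hypothetical job names and skill sets, of how one might score a city’s job network by the skill overlap between its occupations:

```python
from itertools import combinations

def skill_overlap(a: set, b: set) -> float:
    """Jaccard similarity between two jobs' skill sets."""
    return len(a & b) / len(a | b)

def mean_connectivity(jobs: dict) -> float:
    """Average pairwise skill overlap across a city's jobs -- a toy
    proxy for how easily displaced workers can transition."""
    pairs = list(combinations(jobs.values(), 2))
    return sum(skill_overlap(a, b) for a, b in pairs) / len(pairs)

# Two hypothetical cities with the same number of jobs, differing
# only in how much those jobs' skill requirements overlap.
connected_city = {
    "machinist":    {"cad", "tooling", "quality"},
    "cnc_operator": {"cad", "tooling", "gcode"},
    "qa_inspector": {"quality", "tooling", "reporting"},
}
fragmented_city = {
    "machinist": {"cad", "tooling", "quality"},
    "barista":   {"espresso", "service", "cash"},
    "paralegal": {"filing", "research", "drafting"},
}

print(mean_connectivity(connected_city))   # 0.4 -- workers can move between jobs
print(mean_connectivity(fragmented_city))  # 0.0 -- no transition paths
```

In the connected city, a displaced machinist shares skills with both other occupations; in the fragmented one, no job offers a landing spot, mirroring the paper’s finding that connectivity, beyond size and diversity alone, predicts resilience to shocks.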
Even in the absence of temporary crises like the Great Recession or the Covid-19 pandemic, automation promises to upend the employment landscapes of many areas in coming years. How can cities prepare for this disruption? The researchers in this study extended their model to predict how job markets would behave when facing job loss due to automation. They found that while cities of similar sizes would be affected similarly in the beginning phases of automation shocks, those with well-connected job networks would provide better opportunities for displaced workers to find other jobs. This provides a buffer against widespread unemployment, and in some cases even leads to more jobs being created in the aftermath of the initial automation shock. A city like Burlington, Vermont, where job connectivity is high, would fare much better than Bloomington, Indiana, a similar-sized city where job connectivity is low.
The findings of the study suggest that policymakers should consider job connectivity when planning for the future of work in their regions, especially where automation is expected to replace large numbers of jobs. Not only does increased connectivity result in lower unemployment — it also contributes to a rise in overall wages. Furthermore, in individual occupations, workers in jobs that are more “embedded” (connected to other jobs) in a region earn higher wages than similar workers in areas where those jobs are not as connected.
These results offer fresh insight to help steer discussions about the Future of Work and may help guide and complement current decisions about where to invest in job creation and training programs.
MIT Connection Science is a research group hosted by the Sociotechnical Systems Research Center, a part of the Institute for Data, Systems, and Society.