MIT Latest News

Engineering joy
When the late professor emeritus Woodie Flowers SM ’68, MEng ’71, PhD ’73 was a student at MIT, most of his classes involved paper-and-pencil exercises with predetermined solutions. Flowers had an affinity for making things, and for making them work. When he transitioned from student to teacher, he chose to carry this approach into his method of instruction and, in doing so, he helped change the way engineering students are educated — at MIT, and around the world.
Flowers passed away in 2019, but his legacy lives on, and the educational revolution he helped set in motion was profound.
In the 1970s, Flowers took over instruction of 2.70, now called class 2.007 (Design and Manufacturing I). The capstone course is one that many first-year students today look forward to taking, but that wasn’t always the case. Before Flowers took over, instruction relied heavily on chalkboard demonstrations.
“Their idea of design at the time was to draw drawings of parts,” explains Professor Emeritus David Gossard PhD ’75, Flowers’ longtime friend and colleague. “Woodie had a different idea. Give the entire class a kit of materials [and] a common goal, which was to build a machine — to climb a hill, or pick up golf balls, or whatever it did — and make a contest out of it. It was a phenomenal success. The kids loved it, the faculty loved it, the Institute loved it. And over a period of years, it became, I think it's fair to say, an institution.”
With Flowers at the lead, 2.70 transformed into a project-based, get-your-hands-dirty, robotics-competition-focused experience. By all accounts, he also made the experience incredibly fun — something he valued in his own life. He was fond of skydiving and was often seen rollerblading through the Infinite Corridor. The course, informed by his unique style, was at the forefront of a revolution in engineering education, and it quickly helped solidify the Department of Mechanical Engineering’s reputation for innovative education.
“A lot of kids had never started from scratch and built anything,” Flowers once told The Boston Globe. His advisor, Robert Mann, had similar beliefs in a hands-on, modern pedagogy. Building on Mann’s philosophy, and incorporating his own approach, Flowers breathed new life into, and provided a new foundation for, “the MIT way” of teaching. This was a reinvigoration at the right place and the right time that ultimately had a global ripple effect on the popularity of science, technology, engineering, and math (STEM) instruction.
“Over the years lectures had displaced the hands-on stuff, and Woodie brought it back,” says Sanjay Sarma, the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor in Mechanical Engineering. “I can’t think of a single person to have impacted the field of robotics and design in undergraduate, or high school, education as much as Woodie.”
Flowers became interested in mechanical engineering and design at a young age, thanks in large part to his parents. His father was a welder with a penchant for tinkering, inventing, and building; his mother was an elementary school teacher. Flowers grew up taking things apart and putting them back together — an activity he seemed to believe made students better engineers.
Speaking in 2010 with InfiniteMIT, a digital archive of Institute history made possible by the generosity of Jane and A. Neil Pappalardo ’64, Flowers shared a story about a student who had accepted the task in her group of finding out whether a piece of reinforcement steel rebar could be bent into a tight loop and serve as a bearing.
“She came into lab and I was there early, and she had a slightly bent piece of rebar. It had been heated — you could tell that it had been hot, and she was going to report that she really can’t do that, it just kind of doesn’t work,” Flowers recalled. He suggested they try another approach.
“We went out in the lab and I found another big steel bar and I found the biggest vise I could find,” he continued. Flowers cranked the rebar down against the piece of steel he was going to wrap it around, then took a four-pound sledgehammer to it. “My father had a blacksmith pit, so that was familiar to me. I wrapped [the rebar around the steel and] made a fine bearing. As I finished the last blow, I looked up and three of the best students in the class — really sharp people — were standing there with their jaws open. They’d never seen anyone hit a piece of steel hard enough to just mold it.”
He continued, “That visceral understanding of the behavior of mechanics is really important. It doesn’t fall out of the sky and it certainly doesn’t come out of a textbook; it comes through real interaction. I believe I had been so lucky because when I encountered Castigliano’s theorem about deflection of materials, it kind of made sense.”
Course 2.70/2.007 is considered a landmark class in engineering education. It was one of the first hands-on classes to teach students not only how to design an object, but also how to build it and, by demonstrating the value of practical, project-based learning and robotics competitions, it has influenced the approach taken by many other programs. Today, it continues to develop students’ competence and self-confidence as design engineers, with an emphasis on the creative design process bolstered by application of physical laws, robustness, and manufacturability.
Notably, the course also served as the inspiration for development of the FIRST Robotics program, which Flowers and inventor Dean Kamen started in 1989. FIRST has programs for preschool through high school students and, to date, more than 3.2 million youth from more than 100 countries have participated in FIRST competitions.
In the 1970s, the parts kit — or as Flowers fondly referred to it, the “bag of junk” — included things like springs, tongue depressors, and rubber bands. Flowers’ wife Margaret recalls spending many nights packing these kits and hosting advisees in their home. “We considered ourselves a team,” she says.
Today, in addition to using the kit of mechanical parts and materials, students in 2.007 might develop 3D printed components, and they incorporate electronics in their robots for an autonomous portion of the final competition.
The spring 2024 competition, themed after Cartoon Network’s popular animated science fiction sitcom “Rick and Morty,” featured a spaceship that students’ robots could interact with for points, vats of “acid” where balls could be collected and placed in tubes, and game pieces that paid homage to iconic episodes. The final task required the robot to travel up an elevator and send a character down a zipline.
In recent years, other themes have centered on tasks related to stories ranging from “Star Wars” to “Back to the Future” and “Wakanda Forever.” The 2022 theme, however, may have been the most poignant theme to date: “Legacy,” a celebration of Flowers’ life and work.
“[Woodie] revealed, unambiguously, that designing, fabricating, assembling and building things was fun,” says Gossard. “It was arguably the essence of engineering. There was joy in it.”
A version of this article appears in the Spring 2025 issue of MechE Connects, the magazine of the MIT Department of Mechanical Engineering.
Creating a common language
A lot has changed in the 15 years since Kaiming He was a PhD student.
“When you are in your PhD stage, there is a high wall between different disciplines and subjects, and there was even a high wall within computer science,” He says. “The guy sitting next to me could be doing things that I completely couldn’t understand.”
In the seven months since he joined the MIT Schwarzman College of Computing as the Douglas Ross (1954) Career Development Professor of Software Technology in the Department of Electrical Engineering and Computer Science, He says he is experiencing something that in his opinion is “very rare in human scientific history” — a lowering of the walls separating different scientific disciplines.
“There is no way I could ever understand high-energy physics, chemistry, or the frontier of biology research, but now we are seeing something that can help us to break these walls,” He says, “and that is the creation of a common language that has been found in AI.”
Building the AI bridge
According to He, this shift began in 2012 in the wake of the “deep learning revolution,” the point when researchers realized that this set of machine-learning methods based on neural networks was so powerful it could be put to much broader use.
“At this point, computer vision — helping computers to see and perceive the world as if they are human beings — began growing very rapidly, because as it turns out you can apply this same methodology to many different problems and many different areas,” says He. “So the computer vision community quickly grew really large because these different subtopics were now able to speak a common language and share a common set of tools.”
From there, He says the trend began to expand to other areas of computer science, including natural language processing, speech recognition, and robotics, creating the foundation for ChatGPT and other progress toward artificial general intelligence (AGI).
“All of this has happened over the last decade, leading us to a new emerging trend that I am really looking forward to, and that is watching AI methodology propagate other scientific disciplines,” says He.
One of the most famous examples, He says, is AlphaFold, an artificial intelligence program developed by Google DeepMind that predicts protein structures.
“It’s a very different scientific discipline, a very different problem, but people are also using the same set of AI tools, the same methodology to solve these problems,” He says, “and I think that is just the beginning.”
The future of AI in science
Since coming to MIT in February 2024, He says he has talked to professors in almost every department. Some days he finds himself in conversation with two or more professors from very different backgrounds.
“I certainly don’t fully understand their area of research, but they will just introduce some context and then we can start to talk about deep learning, machine learning, [and] neural network models in their problems,” He says. “In this sense, these AI tools are like a common language between these scientific areas: the machine learning tools ‘translate’ their terminology and concepts into terms that I can understand, and then I can learn their problems and share my experience, and sometimes propose solutions or opportunities for them to explore.”
Expanding to different scientific disciplines has significant potential, from using video analysis to predict weather and climate trends to expediting the research cycle and reducing costs in relation to new drug discovery.
While AI tools provide a clear benefit to the work of He’s scientist colleagues, He also notes the reciprocal effect they can have, and have had, on the creation and advancement of AI.
“Scientists provide new problems and challenges that help us continue to evolve these tools,” says He. “But it is also important to remember that many of today’s AI tools stem from earlier scientific areas — for example, artificial neural networks were inspired by biological observations, and diffusion models for image generation were motivated by the concept of diffusion in physics.”
“Science and AI are not isolated subjects. We have been approaching the same goal from different perspectives, and now we are getting together.”
And what better place for them to come together than MIT.
“It is not surprising that MIT can see this change earlier than many other places,” He says. “[The MIT Schwarzman College of Computing] created an environment that connects different people and lets them sit together, talk together, work together, exchange their ideas, while speaking the same language — and I’m seeing this begin to happen.”
In terms of when the walls will fully lower, He notes that this is a long-term investment that won’t happen overnight.
“Decades ago, computers were considered high tech and you needed specific knowledge to understand them, but now everyone is using a computer,” He says. “I expect in 10 or more years, everyone will be using some kind of AI in some way for their research — it’s just their basic tools, their basic language, and they can use AI to solve their problems.”
Validation technique could help scientists make more accurate forecasts
Should you grab your umbrella before you walk out the door? Checking the weather forecast beforehand will only be helpful if that forecast is accurate.
Spatial prediction problems, like weather forecasting or air pollution estimation, involve predicting the value of a variable in a new location based on known values at other locations. Scientists typically use tried-and-true validation methods to determine how much to trust these predictions.
But MIT researchers have shown that these popular validation methods can fail quite badly for spatial prediction tasks. This might lead someone to believe that a forecast is accurate or that a new prediction method is effective, when in reality that is not the case.
The researchers developed a technique to assess prediction-validation methods and used it to prove that two classical methods can be substantively wrong on spatial problems. They then determined why these methods can fail and created a new method designed to handle the types of data used for spatial predictions.
In experiments with real and simulated data, their new method provided more accurate validations than the two most common techniques. The researchers evaluated each method using realistic spatial problems, including predicting the wind speed at Chicago’s O’Hare Airport and forecasting the air temperature at five U.S. metro locations.
Their validation method could be applied to a range of problems, from helping climate scientists predict sea surface temperatures to aiding epidemiologists in estimating the effects of air pollution on certain diseases.
“Hopefully, this will lead to more reliable evaluations when people are coming up with new predictive methods and a better understanding of how well methods are performing,” says Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society, and an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Broderick is joined on the paper by lead author and MIT postdoc David R. Burt and EECS graduate student Yunyi Shen. The research will be presented at the International Conference on Artificial Intelligence and Statistics.
Evaluating validations
Broderick’s group has recently collaborated with oceanographers and atmospheric scientists to develop machine-learning prediction models that can be used for problems with a strong spatial component.
Through this work, they noticed that traditional validation methods can be inaccurate in spatial settings. These methods hold out a small amount of training data, called validation data, and use it to assess the accuracy of the predictor.
To find the root of the problem, they conducted a thorough analysis and determined that traditional methods make assumptions that are inappropriate for spatial data. Evaluation methods rely on assumptions about how validation data and the data one wants to predict, called test data, are related.
Traditional methods assume that validation data and test data are independent and identically distributed, which implies that the value of any data point does not depend on the other data points. But in a spatial application, this is often not the case.
For instance, a scientist may be using validation data from EPA air pollution sensors to test the accuracy of a method that predicts air pollution in conservation areas. However, the EPA sensors are not independent — they were sited based on the location of other sensors.
In addition, perhaps the validation data are from EPA sensors near cities while the conservation sites are in rural areas. Because these data are from different locations, they likely have different statistical properties, so they are not identically distributed.
“Our experiments showed that you get some really wrong answers in the spatial case when these assumptions made by the validation method break down,” Broderick says.
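That failure mode is easy to reproduce in a toy setting. The sketch below is a hypothetical illustration, not the researchers’ code or data: sensors are clustered in one region, the locations to be predicted lie elsewhere, and a classical random hold-out drawn from the sensor locations reports a far smaller error than the predictor actually makes at the distant sites.

```python
# Hypothetical toy example (not from the paper): random hold-out validation can
# look optimistic when validation and test locations come from different regions.
import numpy as np

rng = np.random.default_rng(0)

def field(x):
    # Smooth underlying spatial signal, e.g., a pollution level along a transect.
    return np.sin(3 * x) + 0.5 * x

# Sensors cluster in x = [0, 1] (e.g., near cities); predictions are wanted in x = [2.5, 3].
x_sensors = rng.uniform(0.0, 1.0, 200)
y_sensors = field(x_sensors) + rng.normal(0.0, 0.1, x_sensors.size)
x_test = rng.uniform(2.5, 3.0, 200)
y_test = field(x_test) + rng.normal(0.0, 0.1, x_test.size)

def nearest_neighbor_predict(x_train, y_train, x_query):
    # Predict each query location with the value observed at its nearest training location.
    idx = np.abs(x_query[:, None] - x_train[None, :]).argmin(axis=1)
    return y_train[idx]

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))

# Classical validation: a random hold-out drawn from the same sensor locations (i.i.d. assumption).
perm = rng.permutation(x_sensors.size)
train, val = perm[:150], perm[150:]
pred_val = nearest_neighbor_predict(x_sensors[train], y_sensors[train], x_sensors[val])
pred_test = nearest_neighbor_predict(x_sensors[train], y_sensors[train], x_test)

print("random hold-out RMSE:", rmse(pred_val, y_sensors[val]))      # small
print("RMSE at the distant test sites:", rmse(pred_test, y_test))   # several times larger
```

In this toy case the hold-out score reflects only the sensor-dense region, so it says little about performance where the predictions are actually needed.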
The researchers needed to come up with a new assumption.
Specifically spatial
Thinking specifically about a spatial context, where data are gathered from different locations, they designed a method that assumes validation data and test data vary smoothly in space.
For instance, air pollution levels are unlikely to change dramatically between two neighboring houses.
“This regularity assumption is appropriate for many spatial processes, and it allows us to create a way to evaluate spatial predictors in the spatial domain. To the best of our knowledge, no one has done a systematic theoretical evaluation of what went wrong to come up with a better approach,” says Broderick.
To use their evaluation technique, one inputs the predictor, the locations where predictions are wanted, and the validation data; the technique then does the rest automatically. In the end, it estimates how accurate the predictor’s forecast will be for the location in question. However, effectively assessing their validation technique proved to be a challenge.
“We are not evaluating a method, instead we are evaluating an evaluation. So, we had to step back, think carefully, and get creative about the appropriate experiments we could use,” Broderick explains.
First, they designed several tests using simulated data, which had unrealistic aspects but allowed them to carefully control key parameters. Then, they created more realistic, semi-simulated data by modifying real data. Finally, they used real data for several experiments.
Using three types of data from realistic problems, like predicting the price of a flat in England based on its location and forecasting wind speed, enabled them to conduct a comprehensive evaluation. In most experiments, their technique was more accurate than either traditional method they compared it to.
In the future, the researchers plan to apply these techniques to improve uncertainty quantification in spatial settings. They also want to find other areas where the regularity assumption could improve the performance of predictors, such as with time-series data.
This research is funded, in part, by the National Science Foundation and the Office of Naval Research.
Cleaning up critical minerals and materials production, using microwave plasma
The push to bring manufacturing back to the U.S. is running up against an unfortunate truth: The processes for making many critical materials today create toxic byproducts and other environmental hazards. That’s true for commonly used industrial metals like nickel and titanium, as well as specialty minerals, materials, and coatings that go into batteries, advanced electronics, and defense applications.
Now 6K, founded by former MIT research scientist Kamal Hadidi, is using a new production process to bring critical materials production back to America without the toxic byproducts.
The company is actively scaling its microwave plasma technology, which it calls UniMelt, to transform the way critical minerals are processed, creating new domestic supply chains in the process. UniMelt uses beams of tightly controlled thermal plasma to melt or vaporize precursor materials into particles with precise sizes and crystalline phases.
The technology converts metals, such as titanium, nickel, and refractory alloys, into particles optimized for additive manufacturing for a range of industrial applications. It is also being used to create battery materials for electric vehicles, grid infrastructure, and data centers.
“The markets and critical materials we are focused on are important for not just economic reasons but also U.S. national security, because the bulk of these materials are manufactured today in nonfriendly countries,” 6K CEO Saurabh Ullal says. “Now, the [U.S. government] and our growing customer base can leverage this technology invented at MIT to make the U.S. less dependent on these nonfriendly countries, ensuring supply chain independence now and in the future.”
Named after the 6,000-degree temperature of its plasma, 6K is currently selling its high-performance metal powders to parts manufacturers as well as defense, automotive, medical, and oil and gas companies for use in applications from engine components and medical implants to rockets. To scale its battery materials business, 6K is also planning a 100,000-square-foot production facility in Jackson, Tennessee, with construction set to begin later this year.
A weekend project
Between 1994 and 2007, Hadidi worked at the Plasma Science and Fusion Center (PSFC), where he developed plasma technologies for a range of applications, including hydrogen production, fuel reforming, and detecting environmental toxins. His first company was founded in 2000 out of the PSFC to detect mercury in coal-fired power plants’ smokestacks.
“I loved working at MIT,” Hadidi says. “It’s an amazing place that really challenges you. Just being there is so stimulating because everyone’s trying to come up with new solutions and connect dots between different fields.”
Hadidi also began using high-frequency microwave plasmas to create nanomaterials for use in optical applications. He wasn’t a materials expert, so he collaborated with Professor Eric Jordan, a materials synthesis expert from the University of Connecticut, and the researchers started working on nights and weekends in the PSFC to develop the idea further, eventually patenting the technology.
Hadidi officially founded the company as Amastan in 2007, exploring the use of his microwave plasma technology, later named UniMelt for “uniform melt state process,” to make a host of different materials as part of a government grant he and Jordan received.
The researchers soon realized the microwave plasma technology had several advantages over traditional production techniques for certain materials. For one, it could eliminate several high-energy steps of conventional processes, reducing production times from days to hours in some cases. For batteries and certain critical minerals, the process also works with recycled feedstocks. Amastan was renamed 6K in 2019.
Early on, Hadidi produced metal powders used in additive manufacturing through a process called spheroidization, which results in dense, spherical powders that flow well and make high-performance 3D-printed parts.
Following another grant, Hadidi explored methods for producing a type of battery cathode made from lithium, nickel, manganese, and cobalt (NMC). The standard process for making NMCs involved chemical synthesis, precipitation, heat treatment, and a lot of water. 6K is able to reduce many of those steps, speeding up production and lowering costs while also being more sustainable.
“Our technology completely eliminates toxic waste and recycles all of the byproducts back through the process to utilize everything, including water,” Ullal says.
Scaling domestic production
Today, 6K’s additive manufacturing arm operates out of a factory in Pennsylvania. The company’s critical minerals processing, refining, and recycling systems can produce about 400 tons of material per year and can be used to make more than a dozen types of metal powders. The company also has a 33,000-square-foot battery center in North Andover, Massachusetts, where it produces battery cathode materials for its energy storage and mobility customers.
The Tennessee facility will be used to produce battery cathode materials and represents a massive step up in throughput. The company says it will be able to produce 13,000 tons of material annually when construction is complete next year.
“I’m happy if what I started brings something positive to society, and I’m extremely thankful to all the people that helped me,” says Hadidi, who left the company in 2019. “I’m an entrepreneur at heart. I like to make things. But that doesn’t mean I always succeed. It’s personally very satisfying to see this make an impact.”
The 6K team says its technology can also create a variety of specialty ceramics, advanced coatings, and nanoengineered materials. They say it may also be used to eliminate PFAS, or “forever chemicals,” though that work is at an early stage.
The company recently received a grant to demonstrate a process for recycling critical materials from military depots to produce aerospace and defense products, creating a new value stream for these materials that would otherwise deteriorate or go to landfill. That work is consistent with the company’s motto, “We take nothing from the ground and put nothing into the ground.”
The company’s additive division recently received a $23.4 million Defense Production Act grant “that will enable us to double processing capacity in the next three years,” Ullal says. “The next step is to scale battery materials production to the tens of thousands of tons per year. At this point, it’s a scale-up of known processes, and we just need to execute. The idea of creating a circular economy is near and dear to us because that’s how we’ve built this company and that’s how we generate value: addressing our U.S. national security concerns and protecting the planet as well.”
MIT method enables ultrafast protein labeling of tens of millions of densely packed cells
A new technology developed at MIT enables scientists to label proteins across millions of individual cells in fully intact 3D tissues with unprecedented speed, uniformity, and versatility. Using the technology, the team was able to richly label large tissue samples in a single day. In their new study in Nature Biotechnology, they also demonstrate that the ability to label proteins with antibodies at the single-cell level across large tissue samples can reveal insights left hidden by other widely used labeling methods.
Profiling the proteins that cells are making is a staple of studies in biology, neuroscience, and related fields because the proteins a cell is expressing at a given moment can reflect the functions the cell is trying to perform or its response to its circumstances, such as disease or treatment. As much as microscopy and labeling technologies have advanced, enabling innumerable discoveries, scientists have still lacked a reliable and practical way of tracking protein expression at the level of millions of densely packed individual cells in whole, 3D intact tissues. Because samples are often confined to thin tissue sections on slides, scientists haven’t had the tools to thoroughly appreciate cellular protein expression in the whole, connected systems in which it occurs.
“Conventionally, investigating the molecules within cells requires dissociating tissue into single cells or slicing it into thin sections, as light and chemicals required for analysis cannot penetrate deep into tissues. Our lab developed technologies such as CLARITY and SHIELD, which enable investigation of whole organs by rendering them transparent, but we now needed a way to chemically label whole organs to gain useful scientific insights,” says study senior author Kwanghun Chung, associate professor in The Picower Institute for Learning and Memory, the departments of Chemical Engineering and Brain and Cognitive Sciences, and the Institute for Medical Engineering and Science at MIT. “If cells within a tissue are not uniformly processed, they cannot be quantitatively compared. In conventional protein labeling, it can take weeks for these molecules to diffuse into intact organs, making uniform chemical processing of organ-scale tissues virtually impossible and extremely slow.”
The new approach, called “CuRVE,” represents a major advance — years in the making — toward that goal by demonstrating a fundamentally new approach to uniformly processing large and dense tissues whole. In the study, the researchers explain how they overcame the technical barriers via an implementation of CuRVE called “eFLASH,” and provide copious vivid demonstrations of the technology, including how it yielded new neuroscience insights.
“This is a significant leap, especially in terms of the actual performance of the technology,” says co-lead author Dae Hee Yun PhD '24, a recent MIT graduate who is now a senior application engineer at LifeCanvas Technologies, a startup company Chung founded to disseminate the tools his lab invents. The paper’s other lead author is Young-Gyun Park, a former MIT postdoc who’s now an assistant professor at KAIST in South Korea.
Clever chemistry
The fundamental reason why large, 3D tissue samples are hard to label uniformly is that antibodies seep into tissue very slowly, but are quick to bind to their target proteins. The practical effect of this speed mismatch is that simply soaking a brain in a bath of antibodies will mean that proteins are intensely well labeled on the outer edge of the tissue, but virtually none of the antibodies will find cells and proteins deeper inside.
To improve labeling, the team conceived of a way — the conceptual essence of CuRVE — to resolve the speed mismatch. The strategy was to continuously control the pace of antibody binding while at the same time speeding up antibody permeation throughout the tissue. To figure out how this could work and to optimize the approach, they built and ran a sophisticated computational simulation that enabled them to test different settings and parameters, including different binding rates and tissue densities and compositions.
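The paper’s simulation itself is not reproduced here, but the speed mismatch is easy to caricature. The sketch below is a hypothetical one-dimensional reaction-diffusion model with invented parameters, not the study’s code: free antibody diffuses in from the tissue surfaces and binds irreversibly to abundant sites, and throttling the binding rate is what lets the label reach the core before the rim saturates.

```python
# Hypothetical 1-D reaction-diffusion caricature of antibody labeling in dense tissue
# (illustrative only; parameter values are invented, not taken from the study).
import numpy as np

def simulate(k_on, D=0.01, nx=101, length=1.0, sites=1.0, bath=0.05, t_end=100.0, dt=0.002):
    """Free antibody A diffuses in from both tissue surfaces and binds irreversibly to sites."""
    dx = length / (nx - 1)
    A = np.zeros(nx)            # free antibody concentration
    B = np.zeros(nx)            # bound antibody (labeled protein)
    for _ in range(int(t_end / dt)):
        A[0] = A[-1] = bath     # tissue surfaces held at the antibody bath concentration
        lap = (np.roll(A, 1) - 2 * A + np.roll(A, -1)) / dx**2
        lap[0] = lap[-1] = 0.0
        bind = k_on * A * (sites - B)   # binding to the remaining free sites
        A += dt * (D * lap - bind)
        B += dt * bind
    return B

fast = simulate(k_on=10.0)   # antibodies bind as soon as they enter: label piles up at the rim
slow = simulate(k_on=0.02)   # throttled binding: antibodies spread before they are captured
print("center-to-edge labeling ratio, fast binding:", fast[50] / fast[1])
print("center-to-edge labeling ratio, slow binding:", slow[50] / slow[1])
```

In this caricature the fast-binding run labels essentially nothing at the tissue center, while the throttled run labels the center at a substantial fraction of the edge intensity, which is the qualitative behavior the CuRVE strategy relies on.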
Then they set out to implement their approach in real tissues. Their starting point was a previous technology, called “SWITCH,” in which Chung’s lab devised a way of temporarily turning off antibody binding, letting the antibodies permeate the tissue, and then turning binding back on. As well as it worked, Yun says, the team realized there could be substantial improvements if antibody binding speed could be controlled constantly, but the chemicals used in SWITCH were too harsh for such ongoing treatment. So the team screened a library of similar chemicals to find one that could more subtly and continuously throttle antibody binding speed. They found that deoxycholic acid was an ideal candidate. Using that chemical, the team could not only modulate antibody binding by varying the chemical’s concentration, but also by varying the labeling bath’s pH (or acidity).
Meanwhile, to speed up antibody movement through tissues, the team used another prior technology invented in the Chung Lab: stochastic electrotransport. That technology accelerates the dispersion of antibodies through tissue by applying electric fields.
Implementing this eFLASH system of accelerated dispersion with continuously modifiable binding speed produced the wide variety of labeling successes demonstrated in the paper. In all, the team reported using more than 60 different antibodies to label proteins in cells across large tissue samples.
Notably, each of these specimens was labeled within a day, an “ultra-fast” speed for whole, intact organs, the authors say. Moreover, different preparations did not require new optimization steps.
Valuable visualizations
Among the ways the team put eFLASH to the test was by comparing their labeling to another often-used method: genetically engineering cells to fluoresce when the gene for a protein of interest is being transcribed. The genetic method doesn’t require dispersing antibodies throughout tissue, but it can be prone to discrepancies because reporting gene transcription and actual protein production are not exactly the same thing. Yun added that while antibody labeling reliably and immediately reports on the presence of a target protein, the genetic method can be much less immediate and persistent, still fluorescing even when the actual protein is no longer present.
In the study the team employed both kinds of labeling simultaneously in samples. Visualizing the labels that way, they saw many examples in which antibody labeling and genetic labeling differed widely. In some areas of mouse brains, they found that two-thirds of the neurons expressing PV (a protein prominent in certain inhibitory neurons) according to antibody labeling did not show any genetically based fluorescence. In another example, only a tiny fraction of cells that reported expression of a protein called ChAT via the genetic method also reported it via antibody labeling. In other words, there were cases where genetic labeling either severely underreported or severely overreported protein expression compared to antibody labeling.
The researchers don’t mean to impugn the clear value of using the genetic reporting methods, but instead suggest that also using organ-wide antibody labeling, as eFLASH allows, can help put that data in a richer, more complete context. “Our discovery of large regionalized loss of PV-immunoreactive neurons in healthy adult mice and with high individual variability emphasizes the importance of holistic and unbiased phenotyping,” the authors write.
Or as Yun puts it, the two different kinds of labeling are “two different tools for the job.”
In addition to Yun, Park, and Chung, the paper’s other authors are Jae Hun Cho, Lee Kamentsky, Nicholas Evans, Nicholas DiNapoli, Katherine Xie, Seo Woo Choi, Alexandre Albanese, Yuxuan Tian, Chang Ho Sohn, Qiangge Zhang, Minyoung Kim, Justin Swaney, Webster Guan, Juhyuk Park, Gabi Drummond, Heejin Choi, Luzdary Ruelas, and Guoping Feng.
Funding for the study came from the Burroughs Wellcome Fund, the Searle Scholars Program, a Packard Award in Science and Engineering, a NARSAD Young Investigator Award, the McKnight Foundation, the Freedom Together Foundation, The Picower Institute for Learning and Memory, the NCSOFT Cultural Foundation, and the National Institutes of Health.
Streamlining data collection for improved salmon population management
Sara Beery came to MIT as an assistant professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) eager to focus on ecological challenges. She has fashioned her research career around the opportunity to apply her expertise in computer vision, machine learning, and data science to tackle real-world issues in conservation and sustainability. Beery was drawn to the Institute’s commitment to “computing for the planet,” and set out to bring her methods to global-scale environmental and biodiversity monitoring.
In the Pacific Northwest, salmon have a disproportionate impact on the health of their ecosystems, and their complex reproductive needs have attracted Beery’s attention. Each year, millions of salmon embark on a migration to spawn. Their journey begins in freshwater stream beds where the eggs hatch. Young salmon fry (newly hatched salmon) make their way to the ocean, where they spend several years maturing to adulthood. As adults, the salmon return to the streams where they were born in order to spawn, ensuring the continuation of their species by depositing their eggs in the gravel of the stream beds. Both male and female salmon die shortly after supplying the river habitat with the next generation of salmon.
Throughout their migration, salmon support a wide range of organisms in the ecosystems they pass through. For example, salmon bring nutrients like carbon and nitrogen from the ocean upriver, enhancing their availability to those ecosystems. In addition, salmon are key to many predator-prey relationships: They serve as a food source for various predators, such as bears, wolves, and birds, while helping to control other populations, like insects, through predation. After they die from spawning, the decomposing salmon carcasses also replenish valuable nutrients to the surrounding ecosystem. The migration of salmon not only sustains their own species but plays a critical role in the overall health of the rivers and oceans they inhabit.
At the same time, salmon populations play an important role both economically and culturally in the region. Commercial and recreational salmon fisheries contribute significantly to the local economy. And for many Indigenous peoples in the Pacific Northwest, salmon hold notable cultural value, as they have been central to their diets, traditions, and ceremonies.
Monitoring salmon migration
Increased human activity, including overfishing and hydropower development, together with habitat loss and climate change, has had a significant impact on salmon populations in the region. As a result, effective monitoring and management of salmon fisheries is important to ensure balance among competing ecological, cultural, and human interests. Accurately counting salmon during their seasonal migration to their natal river to spawn is essential in order to track threatened populations, assess the success of recovery strategies, guide fishing season regulations, and support the management of both commercial and recreational fisheries. Precise population data help decision-makers employ the best strategies to safeguard the health of the ecosystem while accommodating human needs. Yet monitoring salmon migration remains a labor-intensive and inefficient undertaking.
Beery is currently leading a research project that aims to streamline salmon monitoring using cutting-edge computer vision methods. This project fits within Beery’s broader research interest, which focuses on the interdisciplinary space between artificial intelligence, the natural world, and sustainability. Its relevance to fisheries management made it a good fit for funding from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Beery’s 2023 J-WAFS seed grant was the first research funding she was awarded since joining the MIT faculty.
Historically, monitoring efforts relied on humans to manually count salmon from riverbanks using eyesight. In the past few decades, underwater sonar systems have been implemented to aid in counting the salmon. These sonar systems are essentially underwater video cameras, but they differ in that they use acoustics instead of light sensors to capture the presence of a fish. Use of this method requires people to set up a tent alongside the river to count salmon based on the output of a sonar camera that is hooked up to a laptop. While this system is an improvement over the original method of monitoring salmon by eyesight, it still relies significantly on human effort and is an arduous and time-consuming process.
Automating salmon monitoring is necessary for better management of salmon fisheries. “We need these technological tools,” says Beery. “We can’t keep up with the demand of monitoring and understanding and studying these really complex ecosystems that we work in without some form of automation.”
In order to automate counting of migrating salmon populations in the Pacific Northwest, the project team, including Justin Kay, a PhD student in EECS, has been collecting data in the form of videos from sonar cameras at different rivers. The team annotates a subset of the data to train the computer vision system to autonomously detect and count the fish as they migrate. Kay describes the process of how the model counts each migrating fish: “The computer vision algorithm is designed to locate a fish in the frame, draw a box around it, and then track it over time. If a fish is detected on one side of the screen and leaves on the other side of the screen, then we count it as moving upstream.” On rivers where the team has created training data for the system, it has produced strong results, with only 3 to 5 percent counting error. This is well below the target that the team and partnering stakeholders set of no more than a 10 percent counting error.
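A minimal sketch of that kind of counting logic, using hypothetical detection and track data structures rather than the team’s actual pipeline, might look like this:

```python
# Simplified sketch of the crossing-based counting rule described above
# (hypothetical data structures; not the team's actual pipeline).
from dataclasses import dataclass

@dataclass
class Detection:
    frame: int
    track_id: int
    x_center: float   # horizontal center of the tracked bounding box, in pixels

def count_upstream(detections, frame_width, upstream_is_right=True, margin=0.1):
    """Count tracks that enter in one edge zone of the frame and exit in the opposite one."""
    left_edge = margin * frame_width
    right_edge = (1 - margin) * frame_width
    tracks = {}
    for det in detections:
        tracks.setdefault(det.track_id, []).append(det)

    upstream = 0
    for track in tracks.values():
        track.sort(key=lambda d: d.frame)
        start, end = track[0].x_center, track[-1].x_center
        if upstream_is_right and start < left_edge and end > right_edge:
            upstream += 1
        elif not upstream_is_right and start > right_edge and end < left_edge:
            upstream += 1
    return upstream

# Example: one fish crosses fully left-to-right; another lingers mid-frame and is not counted.
dets = [Detection(0, 1, 20), Detection(10, 1, 300), Detection(20, 1, 620),
        Detection(5, 2, 310), Detection(15, 2, 330)]
print(count_upstream(dets, frame_width=640))   # -> 1
```

A production system would also have to cope with missed detections, broken tracks, and fish that reverse direction, which this sketch ignores.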
Testing and deployment: Balancing human effort and use of automation
The researchers’ technology is being deployed to monitor the migration of salmon on the newly restored Klamath River. Four dams on the river were recently demolished, making it the largest dam removal project in U.S. history. The dams came down after a more than 20-year-long campaign to remove them, which was led by Klamath tribes, in collaboration with scientists, environmental organizations, and commercial fishermen. After the removal of the dams, 240 miles of the river now flow freely and nearly 800 square miles of habitat are accessible to salmon. Beery notes the almost immediate regeneration of salmon populations in the Klamath River: “I think it was within eight days of the dam coming down, they started seeing salmon actually migrate upriver beyond the dam.” In a collaboration with California Trout, the team is currently processing new data to adapt and create a customized model that can then be deployed to help count the newly migrating salmon.
One challenge with the system revolves around training the model to accurately count the fish in unfamiliar environments with variations such as riverbed features, water clarity, and lighting conditions. These factors can significantly alter how the fish appear on the output of a sonar camera and confuse the computer model. When deployed in new rivers where no data have been collected before, like the Klamath, the performance of the system degrades and the margin of error increases substantially, to 15 to 20 percent.
The researchers constructed an automatic adaptation algorithm within the system to overcome this challenge and create a scalable system that can be deployed to any site without human intervention. This self-initializing technology works to automatically calibrate to the new conditions and environment to accurately count the migrating fish. In testing, the automatic adaptation algorithm was able to reduce the counting error down to the 10 to 15 percent range. The improvement in counting error with the self-initializing function means that the technology is closer to being deployable to new locations without much additional human effort.
Enabling real-time management with the “Fishbox”
Another challenge faced by the research team was the development of an efficient data infrastructure. In order to run the computer vision system, the video produced by sonar cameras must be delivered via the cloud or by manually mailing hard drives from a river site to the lab. These methods have notable drawbacks: a cloud-based approach is limited due to lack of internet connectivity in remote river site locations, and shipping the data introduces problems of delay.
Instead of relying on these methods, the team has implemented a power-efficient computer, coined the “Fishbox,” that can be used in the field to perform the processing. The Fishbox consists of a small, lightweight computer with optimized software that fishery managers can plug into their existing laptops and sonar cameras. The system is then capable of running salmon counting models directly at the sonar sites without the need for internet connectivity. This allows managers to make hour-by-hour decisions, supporting more responsive, real-time management of salmon populations.
Community development
The team is also working to bring a community together around monitoring for salmon fisheries management in the Pacific Northwest. “It’s just pretty exciting to have stakeholders who are enthusiastic about getting access to [our technology] as we get it to work and having a tighter integration and collaboration with them,” says Beery. “I think particularly when you’re working on food and water systems, you need direct collaboration to help facilitate impact, because you're ensuring that what you develop is actually serving the needs of the people and organizations that you are helping to support.”
This past June, Beery’s lab organized a workshop in Seattle that convened nongovernmental organizations, tribes, and state and federal departments of fish and wildlife to discuss the use of automated sonar systems to monitor and manage salmon populations. Kay notes that the workshop was an “awesome opportunity to have everybody sharing different ways that they're using sonar and thinking about how the automated methods that we’re building could fit into that workflow.” The discussion continues now via a shared Slack channel created by the team, with over 50 participants. Convening this group is a significant achievement, as many of these organizations would not otherwise have had an opportunity to come together and collaborate.
Looking forward
As the team continues to tune the computer vision system, refine their technology, and engage with diverse stakeholders — from Indigenous communities to fishery managers — the project is poised to make significant improvements to the efficiency and accuracy of salmon monitoring and management in the region. And as Beery advances the work of her MIT group, the J-WAFS seed grant is helping to keep challenges such as fisheries management in her sights.
“The fact that the J-WAFS seed grant existed here at MIT enabled us to continue to work on this project when we moved here,” comments Beery, adding, “It also expanded the scope of the project and allowed us to maintain active collaboration on what I think is a really important and impactful project.”
As J-WAFS marks its 10th anniversary this year, the program aims to continue supporting and encouraging MIT faculty to pursue innovative projects that aim to advance knowledge and create practical solutions with real-world impacts on global water and food system challenges.
3 Questions: What the laws of physics tell us about CO2 removal
Human activities continue to pump billions of tons of carbon dioxide into the atmosphere each year, raising global temperatures and driving extreme weather events. As countries grapple with climate impacts and ways to significantly reduce carbon emissions, there have been various efforts to advance carbon dioxide removal (CDR) technologies that directly remove carbon dioxide from the air and sequester it for long periods of time.
Unlike carbon capture and storage technologies, which are designed to remove carbon dioxide at point sources such as fossil-fuel plants, CDR aims to remove carbon dioxide molecules that are already circulating in the atmosphere.
A new report by the American Physical Society and led by an MIT physicist provides an overview of the major experimental CDR approaches and determines their fundamental physical limits. The report focuses on methods that have the biggest potential for removing carbon dioxide, at the scale of gigatons per year, which is the magnitude that would be required to have a climate-stabilizing impact.
The new report was commissioned by the American Physical Society's Panel on Public Affairs, and appeared last week in the journal PRX. The report was chaired by MIT professor of physics Washington Taylor, who spoke with MIT News about CDR’s physical limitations and why it’s worth pursuing in tandem with global efforts to reduce carbon emissions.
Q: What motivated you to look at carbon dioxide removal systems from a physical science perspective?
A: The number one thing driving climate change is the fact that we’re taking carbon that has been stuck in the ground for 100 million years, and putting it in the atmosphere, and that’s causing warming. In the last few years there’s been a lot of interest both by the government and private entities in finding technologies to directly remove the CO2 from the air.
How to manage atmospheric carbon is the critical question in dealing with our impact on Earth’s climate. So, it’s very important for us to understand whether we can affect the carbon levels not just by changing our emissions profile but also by directly taking carbon out of the atmosphere. Physics has a lot to say about this because the possibilities are very strongly constrained by thermodynamics, mass issues, and things like that.
Q: What carbon dioxide removal methods did you evaluate?
A: They’re all at an early stage. It's kind of the Wild West out there in terms of the different ways in which companies are proposing to remove carbon from the atmosphere. In this report, we break down CDR processes into two classes: cyclic and once-through.
Imagine we are in a boat that has a hole in the hull and is rapidly taking on water. Of course, we want to plug the hole as quickly as we can. But even once we have fixed the hole, we need to get the water out so we aren't in danger of sinking or getting swamped. And this is particularly urgent if we haven't completely fixed the hole so we still have a slow leak. Now, imagine we have a couple of options for how to get the water out so we don’t sink.
The first is a sponge that we can use to absorb water, that we can then squeeze out and reuse. That’s a cyclic process in the sense that we have some material that we’re using over and over. There are cyclic CDR processes like chemical “direct air capture” (DAC), which acts basically like a sponge. You set up a big system with fans that blow air past some material that captures carbon dioxide. When the material is saturated, you close off the system and then use energy to essentially squeeze out the carbon and store it in a deep repository. Then you can reuse the material, in a cyclic process.
The second class of approaches is what we call “once-through.” In the boat analogy, it would be as if you tried to soak up the water with rolls of paper towels: you let each roll saturate, throw it overboard, and use it only once.
There are once-through CDR approaches, like enhanced rock weathering, that are designed to accelerate a natural process by which certain rocks, when exposed to air, absorb carbon from the atmosphere. Worldwide, this natural rock weathering is estimated to remove about 1 gigaton of carbon each year. “Enhanced rock weathering” is a CDR approach where you would dig up a lot of this rock and grind it up really small, to less than the width of a human hair, to get the process to happen much faster. The idea is, you dig up something, spread it out, and absorb CO2 in one go.
The key difference between these two processes is that the cyclic process is subject to the second law of thermodynamics and there’s an energy constraint. You can set an actual limit from physics, saying any cyclic process is going to take a certain amount of energy, and that cannot be avoided. For example, we find that for cyclic direct-air-capture (DAC) plants, based on second law limits, the absolute minimum amount of energy you would need to capture a gigaton of carbon is comparable to the total yearly electric energy consumption of the state of Virginia. Systems currently under development use at least three to 10 times this much energy on a per ton basis (and capture tens of thousands, not billions, of tons). Such systems also need to move a lot of air; the air that would need to pass through a DAC system to capture a gigaton of CO2 is comparable to the amount of air that passes through all the air cooling systems on the planet.
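For a sense of where that comparison comes from, here is a back-of-the-envelope version of the second-law bound, using rounded illustrative figures rather than the report's detailed accounting and taking the gigaton to be a gigaton of CO2:

```latex
% Minimum (reversible) work to extract CO2 present at mole fraction x of roughly 420 ppm
% from air at T of roughly 298 K:
\[
  w_{\min} \;\approx\; R\,T\,\ln\frac{1}{x}
  \;\approx\; (8.314\ \mathrm{J\,mol^{-1}\,K^{-1}})(298\ \mathrm{K})\,\ln\frac{1}{4.2\times 10^{-4}}
  \;\approx\; 19\ \mathrm{kJ\,mol^{-1}}.
\]
% One gigaton of CO2 is about 2.3 x 10^13 mol (44 g/mol), so the thermodynamic floor is roughly
\[
  W_{\min} \;\approx\; (2.3\times 10^{13}\ \mathrm{mol})\times(19\ \mathrm{kJ\,mol^{-1}})
  \;\approx\; 4\times 10^{17}\ \mathrm{J} \;\approx\; 120\ \mathrm{TWh},
\]
% on the order of a mid-size U.S. state's annual electricity consumption; as noted above,
% systems under development need several times this on a per-ton basis.
```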
On the other hand, if you have a once-through process, you could in some respects avoid the energy constraint, but now you’ve got a materials constraint set by basic stoichiometry. For once-through processes like enhanced rock weathering, that means that if you want to capture a gigaton of CO2, roughly speaking, you’re going to need a billion tons of rock.
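A rough stoichiometric illustration of that materials constraint, assuming an idealized magnesium silicate (olivine) and complete reaction, which real rocks do not achieve:

```latex
% Complete carbonation of forsterite binds two CO2 per formula unit:
\[
  \mathrm{Mg_2SiO_4} \;+\; 2\,\mathrm{CO_2} \;\longrightarrow\; 2\,\mathrm{MgCO_3} \;+\; \mathrm{SiO_2}
\]
\[
  \frac{m_{\mathrm{rock}}}{m_{\mathrm{CO_2}}} \;\approx\;
  \frac{140.7\ \mathrm{g\,mol^{-1}}}{2 \times 44.0\ \mathrm{g\,mol^{-1}}} \;\approx\; 1.6,
\]
% so even in this best case, capturing a gigaton of CO2 takes on the order of a billion tons
% of ground rock, and more for basalt or for incomplete weathering.
```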
So, to capture gigatons of carbon through engineered methods requires tremendous amounts of physical material, air movement, and energy. On the other hand, everything we’re doing to put that CO2 in the atmosphere is extensive too, so large-scale emissions reductions face comparable challenges.
Q: What does the report conclude, in terms of whether and how to remove carbon dioxide from the atmosphere?
A: Our initial prejudice was, CDR is just going to take so much energy, and there’s no way around that because of the second law of thermodynamics, regardless of the method.
But as we discussed, there is this nuance about cyclic versus once-through systems. And there are two points of view that we ended up threading a needle between. One is the view that CDR is a silver bullet, and we’ll just do CDR and not worry about emissions — we’ll just suck it all out of the atmosphere. And that’s not the case. It will be really expensive, and will take a lot of energy and materials to do large-scale CDR. But there’s another view, where people say, don’t even think about CDR. Even thinking about CDR will compromise our efforts toward emissions reductions. The report comes down somewhere in the middle, saying that CDR is not a magic bullet, but also not a no-go.
If we are serious about managing climate change, we will likely want substantial CDR in addition to aggressive emissions reductions. The report concludes that research and development on CDR methods should be selectively and prudently pursued despite the expected cost and energy and material requirements.
At a policy level, the main message is that we need an economic and policy framework that incentivizes emissions reductions and CDR in a common framework; this would naturally allow the market to optimize climate solutions. Since in many cases it is much easier and cheaper to cut emissions than it will likely ever be to remove atmospheric carbon, clearly understanding the challenges of CDR should help motivate rapid emissions reductions.
For me, I’m optimistic in the sense that scientifically we understand what it will take to reduce emissions and to use CDR to bring CO2 levels down to a slightly lower level. Now, it’s really a societal and economic problem. I think humanity has the potential to solve these problems. I hope that we can find common ground so that we can take actions as a society that will benefit both humanity and the broader ecosystems on the planet, before we end up having bigger problems than we already have.
Seeking climate connections among the oceans’ smallest organisms
Andrew Babbin tries to pack light for work trips. Along with the travel essentials, though, he also brings a roll each of electrical tape, duct tape, lab tape, a pack of cable ties, and some bungee cords.
“It’s my MacGyver kit: You never know when you have to rig something on the fly in the field or fix a broken bag,” Babbin says.
The trips Babbin takes are far out to sea, on month-long cruises, where he works to sample waters off the Pacific coast and out in the open ocean. In remote locations, repair essentials often come in handy, as when Babbin had to zip-tie a wrench to a sampling device to help it sink through an icy Antarctic lake.
Babbin is an oceanographer and marine biogeochemist who studies marine microbes and the ways in which they control the cycling of nitrogen between the ocean and the atmosphere. This exchange helps maintain healthy ocean ecosystems and supports the ocean’s capacity to store carbon.
By combining measurements that he takes in the ocean with experiments in his MIT lab, Babbin is working to understand the connections between microbes and ocean nitrogen, which could in turn help scientists identify ways to maintain the ocean’s health and productivity. His work has taken him to many coastal and open-ocean regions around the globe.
“You really become an oceanographer and an Earth scientist to see the world,” says Babbin, who recently earned tenure as the Cecil and Ida Green Career Development Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “We embrace the diversity of places and cultures on this planet. To see just a small fraction of that is special.”
A powerful cycle
The ocean has been a constant presence for Babbin since childhood. His family is from Monmouth County, New Jersey, where he and his twin sister grew up playing along the Jersey shore. When they were teenagers, their parents took the kids on family cruise vacations.
“I always loved being on the water,” he says. “My favorite parts of any of those cruises were the days at sea, where you were just in the middle of some ocean basin with water all around you.”
In school, Babbin gravitated to the sciences, and chemistry in particular. After high school, he attended Columbia University, where a visit to the school’s Earth and environmental engineering department catalyzed a realization.
“For me, it was always this excitement about the water and about chemistry, and it was this pop of, ‘Oh wow, it doesn’t have to be one or the other,’” Babbin says.
He chose to major in Earth and environmental engineering, with a concentration in water resources and climate risks. After graduating in 2008, Babbin returned to his home state, where he attended Princeton University and set a course for a PhD in geosciences, with a focus on chemical oceanography and environmental microbiology. His advisor, oceanographer Bess Ward, took Babbin on as a member of her research group and invited him on several month-long cruises to various parts of the eastern tropical Pacific.
“I still remember that first trip,” Babbin recalls. “It was a whirlwind. Everyone else had been to sea a gazillion times and was loading the boat and strapping things down, and I had no idea of anything. And within a few hours, I was doing an experiment as the ship rocked back and forth!”
Babbin learned to deploy sampling canisters overboard, then haul them back up and analyze the seawater inside for signs of nitrogen — an essential nutrient for all living things on Earth.
As it turns out, the plants and animals that depend on nitrogen to survive are unable to take it up from the atmosphere themselves. They require a sort of go-between, in the form of microbes that “fix” nitrogen, converting it from nitrogen gas to more digestible forms. In the ocean, this nitrogen fixation is done by highly specialized microbial species, which work to make nitrogen available to phytoplankton — microscopic plant-like organisms that are the foundation of the marine food chain. Phytoplankton are also a main route by which the ocean absorbs carbon dioxide from the atmosphere.
Microorganisms may also use these biologically available forms of nitrogen for energy under certain conditions, returning nitrogen to the atmosphere. These microbes can also release nitrous oxide as a byproduct, a potent greenhouse gas that can also catalyze ozone loss in the stratosphere.
Through his graduate work, at sea and in the lab, Babbin became fascinated with the cycling of nitrogen and the role that nitrogen-fixing microbes play in supporting the ocean’s ecosystems and the climate overall. A balance of nitrogen inputs and outputs sustains phytoplankton and maintains the ocean’s ability to soak up carbon dioxide.
“Some of the really pressing questions in ocean biogeochemistry pertain to this cycling of nitrogen,” Babbin says. “Understanding the ways in which this one element cycles through the ocean, and how it is central to ecosystem health and the planet’s climate, has been really powerful.”
In the lab and out to sea
After completing his PhD in 2014, Babbin arrived at MIT as a postdoc in the Department of Civil and Environmental Engineering.
“My first feeling when I came here was, wow, this really is a nerd’s playground,” Babbin says. “I embraced being part of a culture where we seek to understand the world better, while also doing the things we really want to do.”
In 2017, he accepted a faculty position in MIT’s Department of Earth, Atmospheric and Planetary Sciences. He set up his laboratory space, painted in his favorite brilliant orange, on the top floor of the Green Building.
His group uses 3D printers to fabricate microfluidic devices in which they reproduce the conditions of the ocean environment and study microbe metabolism and its effects on marine chemistry. In the field, Babbin has led research expeditions to the Galapagos Islands and parts of the eastern Pacific, where he has collected and analyzed samples of air and water for signs of nitrogen transformations and microbial activity. His new measuring station in the Galapagos is able to infer marine emissions of nitrous oxide across a large swath of the eastern tropical Pacific Ocean. His group has also sailed to southern Cuba, where the researchers studied interactions of microbes in coral reefs.
Most recently, Babbin traveled to Antarctica, where he set up camp next to frozen lakes and plumbed for samples of pristine ice water that he will analyze for genetic remnants of ancient microbes. Such preserved bacterial DNA could help scientists understand how microbes evolved and influenced the Earth’s climate over billions of years.
“Microbes are the terraformers,” Babbin notes. “They have been, since life evolved more than 3 billion years ago. We have to think about how they shape the natural world and how they will respond to the Anthropocene as humans monkey with the planet ourselves.”
Collective action
Babbin is now charting new research directions. In addition to his work at sea and in the lab, he is venturing into engineering, with a new project to design denitrifying capsules. While nitrogen is an essential nutrient for maintaining a marine ecosystem, too much nitrogen, such as from fertilizer that runs off into lakes and streams, can generate blooms of toxic algae. Babbin is looking to design eco-friendly capsules that scrub excess anthropogenic nitrogen from local waterways.
He’s also beginning the process of designing a new sensor to measure low-oxygen concentrations in the ocean. As the planet warms, the oceans are losing oxygen, creating “dead zones” where fish cannot survive. While others including Babbin have tried to map these oxygen minimum zones, or OMZs, they have done so sporadically, by dropping sensors into the ocean over limited ranges, depths, and times. Babbin’s sensors could potentially provide a more complete map of OMZs, as they would be deployed on wide-ranging, deep-diving, and naturally propulsive vehicles: sharks.
“We want to measure oxygen. Sharks need oxygen. And if you look at where the sharks don’t go, you might have a sense of where the oxygen is not,” says Babbin, who is working with marine biologists on ways to tag sharks with oxygen sensors. “A number of these large pelagic fish move up and down the water column frequently, so you can map the depth to which they dive, and infer something about their behavior. And my suggestion is, you might also infer something about the ocean’s chemistry.”
When he reflects on what stimulates new ideas and research directions, Babbin credits working with others, in his own group and across MIT.
“My best thoughts come from this collective action,” Babbin says. “Particularly because we all have different upbringings and approach things from a different perspective.”
He’s bringing this collaborative spirit to his new role, as a mission director for MIT’s Climate Project. Along with Jesse Kroll, who is a professor of civil and environmental engineering and of chemical engineering, Babbin co-leads one of the project’s six missions: Restoring the Atmosphere, Protecting the Land and Oceans. Babbin and Kroll are planning a number of workshops across campus that they hope will generate new connections, and spark new ideas, particularly around ways to evaluate the effectiveness of different climate mitigation strategies and better assess the impacts of climate on society.
“One area we want to promote is thinking of climate science and climate interventions as two sides of the same coin,” Babbin says. “There’s so much action that’s trying to be catalyzed. But we want it to be the best action. Because we really have one shot at doing this. Time is of the essence.”
David McGee named head of the Department of Earth, Atmospheric and Planetary Sciences
David McGee, the William R. Kenan Jr. Professor of Earth and Planetary Sciences at MIT, was recently appointed head of the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), effective Jan. 15. He assumes the role from Professor Robert van der Hilst, the Schlumberger Professor of Earth and Planetary Sciences, who led the department for 13 years.
McGee specializes in applying isotope geochemistry and geochronology to reconstruct Earth’s climate history, helping to ground-truth our understanding of how the climate system responds during periods of rapid change. He has also been instrumental in the growth of the department’s community and culture, having served as EAPS associate department head since 2020.
“David is an amazing researcher who brings crucial, data-based insights to aid our response to climate change,” says Nergis Mavalvala, dean of the School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics. “He is also a committed and caring educator, providing extraordinary investment in his students’ learning experiences, and through his direction of Terrascope, one of our unique first-year learning communities focused on generating solutions to sustainability challenges.”
“I am energized by the incredible EAPS community, by Rob’s leadership over the last 13 years, and by President Kornbluth’s call for MIT to innovate effective and wise responses to climate change,” says McGee. “EAPS has a unique role in this time of reckoning with planetary boundaries — our collective path forward needs to be guided by a deep understanding of the Earth system and a clear sense of our place in the universe.”
McGee’s research seeks to understand the Earth system’s response to past climate changes. Using geochemical analysis and uranium-series dating, McGee and his group investigate stalagmites, ancient lake deposits, and deep-sea sediments from field sites around the world to trace patterns of wind and precipitation, water availability in drylands, and permafrost stability through space and time. Armed with precise chronologies, he aims to shed light on drivers of historical hydroclimatic shifts and provide quantitative tests of climate model performance.
Beyond research, McGee has helped shape numerous Institute initiatives focused on environment, climate, and sustainability, including serving on the MIT Climate and Sustainability Consortium Faculty Steering Committee and the faculty advisory board for the MIT Environment and Sustainability Minor.
McGee also co-chaired MIT's Climate Education Working Group, one of three working groups established under the Institute's Fast Forward climate action plan. The group identified opportunities to strengthen climate- and sustainability-related education at the Institute, from curricular offerings to experiential learning opportunities and beyond.
In April 2023, the working group hosted the MIT Symposium for Advancing Climate Education, featuring talks by McGee and others on how colleges and universities can innovate and help students develop the skills, capacities, and perspectives they’ll need to live, lead, and thrive in a world being remade by the accelerating climate crisis.
“David is reimagining MIT undergraduate education to include meaningful collaborations with communities outside of MIT, teaching students that scientific discovery is important, but not always enough to make impact for society,” says van der Hilst. “He will help shape the future of the department with this vital perspective.”
From the start of his career, McGee has been dedicated to sharing his love of exploration with students. He earned a master’s degree in teaching and spent seven years as a teacher in middle school and high school classrooms before earning his PhD in Earth and environmental sciences from Columbia University. He joined the MIT faculty in 2012, and in 2018 received the Excellence in Mentoring Award from MIT’s Undergraduate Advising and Academic Programming office. In 2015, he became the director of MIT’s Terrascope first-year learning community.
“David's exemplary teaching in Terrascope comes through his understanding that effective solutions must be found where science intersects with community engagement to forge ethical paths forward,” adds van der Hilst. In 2023, for his work with Terrascope, McGee received the school’s highest award, the School of Science Teaching Prize. In 2022, he was named a Margaret MacVicar Faculty Fellow, the highest teaching honor at MIT.
As associate department head, McGee worked alongside van der Hilst and student leaders to promote EAPS community engagement, improve internal supports and reporting structures, and bolster opportunities for students to pursue advanced degrees and STEM careers.
Study in India shows kids use different math skills at work vs. school
In India, many kids who work in retail markets have good math skills: They can quickly perform a range of calculations to complete transactions. But as a new study shows, these kids often perform much worse on the same kinds of problems when they are posed the way they are taught in the classroom. This happens even though many of these students still attend school, or attended school through 7th or 8th grade.
Conversely, the study also finds, Indian students who are still enrolled in school and don’t have jobs do better on school-type math problems, but they often fare poorly at the kinds of problems that occur in marketplaces.
Overall, both the “market kids” and the “school kids” struggle with the approach the other group is proficient in, raising questions about how to help both groups learn math more comprehensively.
“For the school kids, they do worse when you go from an abstract problem to a concrete problem,” says MIT economist Esther Duflo, co-author of a new paper detailing the study’s results. “For the market kids, it’s the opposite.”
Indeed, the kids with jobs who are also in school “underperform despite being extraordinarily good at mental math,” says Abhijit Banerjee, an MIT economist and another co-author of the paper. “That for me was always the revelation, that the one doesn’t translate into the other.”
The paper, “Children’s arithmetic skills do not transfer between applied and academic math,” is published today in Nature. The authors are Banerjee, the Ford Professor of Economics at MIT; Swati Bhattacharjee of the newspaper Ananda Bazar Patrika, in Kolkata, India; Raghabendra Chattopadhyay of the Indian Institute of Management in Kolkata; Duflo, the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics at MIT; Alejandro J. Ganimian, a professor of applied psychology and economics at New York University; Kailash Rajaha, a doctoral candidate in economics at MIT; and Elizabeth S. Spelke, a professor of psychology at Harvard University.
Duflo and Banerjee shared the Nobel Prize in Economics in 2019 and are co-founders of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL), a global leader in development economics.
Three experiments
The study consists largely of three data-collection exercises with some embedded experiments. The first one shows that 201 kids working in markets in Kolkata do have good math skills. For instance, a researcher, posing as an ordinary shopper, would ask for the cost of 800 grams of potatoes sold at 20 rupees per kilogram, then ask for the cost of 1.4 kilograms of onions sold at 15 rupees per kilo. They would request the combined answer — 37 rupees — then hand the market worker a 200 rupee note and collect 163 rupees back. All told, the kids working in markets correctly solved this kind of problem from 95 to 98 percent of the time by the second try.
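For readers who want to check the arithmetic, here is a minimal sketch in Python that reproduces the transaction using only the prices and quantities from the example above:

```python
# Reproduce the market transaction described above.
potatoes = 0.8 * 20   # 800 grams at 20 rupees per kilogram = 16 rupees
onions = 1.4 * 15     # 1.4 kilograms at 15 rupees per kilogram = 21 rupees

total = potatoes + onions   # 37 rupees
change = 200 - total        # customer pays with a 200 rupee note

print(f"Total: {total:.0f} rupees, change: {change:.0f} rupees")
# Total: 37 rupees, change: 163 rupees
```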
However, when the working children were pulled aside (with their parents’ permission) and given a standardized Indian national math test, just 32 percent could correctly divide a three-digit number by a one-digit number, and just 54 percent could correctly subtract a two-digit number from another two-digit number two times. Clearly, the kids’ skills were not yielding classroom results.
The researchers then conducted a second study with 400 kids working in markets in Delhi, which replicated the results: Working kids had a strong ability to handle market transactions, but only about 15 percent of the ones also in school were at average proficiency in math.
In the second study, the researchers also asked the reverse question: How do students doing well in school fare at market math problems? Here, with 200 students from 17 Delhi schools who do not work in markets, they found that 96 percent of the students could solve typical problems with a pencil, paper, unlimited time, and one opportunity to self-correct. But when the students had to solve the problems in a make-believe “market” setting, that figure dropped to just 60 percent. The students had unlimited time and access to paper and pencil, so that figure may actually overestimate how they would fare in a market.
Finally, in a third study, conducted in Delhi with over 200 kids, the researchers compared the performances of both “market” and “school” kids again on numerous math problems in varying conditions. While 85 percent of the working kids got the right answer to a market transaction problem, only 10 percent of nonworking kids correctly answered a question of similar difficulty, when faced with limited time and with no aids like pencil and paper. However, given the same division and subtraction problems, but with pencil and paper, 59 percent of nonmarket kids got them right, compared to 45 percent of market kids.
To further evaluate market kids and school kids on a level playing field, the researchers then presented each group with a word problem about a boy going to the market and buying two vegetables. Roughly one-third of the market kids were able to solve this without any aid, while fewer than 1 percent of the school kids did.
Why might the performance of the nonworking students decline when given a problem in market conditions?
“They learned an algorithm but didn’t understand it,” Banerjee says.
Meanwhile, the market kids seemed to use certain tactics to handle retail transactions. For one thing, they appear to use rounding well. Take a problem like 43 times 11. To handle that intuitively, you might multiply 43 times 10, and then add 43, for the final answer of 473. This appears to be what they are doing.
“The market kids are able to exploit base 10, so they do better on base 10 problems,” Duflo says. “The school kids have no idea. It makes no difference to them. The market kids may have additional tricks of this sort that we did not see.” On the other hand, the school kids had a better grasp of formal written methods of division, subtraction, and more.
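As an illustration of the kind of base-10 shortcut described above, here is a minimal sketch of the rounding strategy applied to the 43-times-11 example. It is an illustration only, not a claim about how any particular child computes:

```python
def times_eleven(n):
    """Multiply by 11 using the base-10 decomposition described above:
    n * 11 = n * 10 + n, i.e., one easy "shift" plus one addition."""
    return n * 10 + n

print(times_eleven(43))  # 473, the same answer as 43 * 11
```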
Going farther in school
The findings raise a significant point about students’ skills and academic progress. While it is a good thing that the kids with market jobs are proficient at generating rapid answers, it would likely be better for their long-term futures if they also did well in school and wound up with a high school degree or better. Finding a way to cross the divide between informal and formal ways of tackling math problems, then, could notably help some Indian children.
The fact that such a divide exists, meanwhile, suggests some new approaches could be tried in the classroom.
Banerjee, for one, suspects that part of the issue is a classroom process that makes it seem as if there is only one true route to finding an arithmetic answer. Instead, he believes, following the work of co-author Spelke, that helping students reason their way to an approximation of the right answer can help them truly get a handle on what is needed to solve these types of problems.
Even so, Duflo adds, “We don’t want to blame the teachers. It’s not their fault. They are given a strict curriculum to follow, and strict methods to follow.”
That still leaves open the question of what to change, in concrete classroom terms. That topic, it happens, is something the research group is weighing as they consider new experiments that might address it directly. The current finding, however, makes clear that progress would be useful.
“These findings highlight the importance of educational curricula that bridge the gap between intuitive and formal mathematics,” the authors state in the paper.
Support for the research was provided, in part, by the Abdul Latif Jameel Poverty Action Lab’s Post-Primary Education Initiative, the Foundation Blaise Pascal, and the AXA Research Fund.
Physicists measure a key aspect of superconductivity in “magic-angle” graphene
Superconducting materials are similar to the carpool lane in a congested interstate. Like commuters who ride together, electrons that pair up can bypass the regular traffic, moving through the material with zero friction.
But just as with carpools, how easily electron pairs can flow depends on a number of conditions, including the density of pairs that are moving through the material. This “superfluid stiffness,” or the ease with which a current of electron pairs can flow, is a key measure of a material’s superconductivity.
Physicists at MIT and Harvard University have now directly measured superfluid stiffness for the first time in “magic-angle” graphene — materials that are made from two or more atomically thin sheets of graphene twisted with respect to each other at just the right angle to enable a host of exceptional properties, including unconventional superconductivity.
This superconductivity makes magic-angle graphene a promising building block for future quantum-computing devices, but exactly how the material superconducts is not well-understood. Knowing the material’s superfluid stiffness will help scientists identify the mechanism of superconductivity in magic-angle graphene.
The team’s measurements suggest that magic-angle graphene’s superconductivity is primarily governed by quantum geometry, which refers to the conceptual “shape” of quantum states that can exist in a given material.
The results, which are reported today in the journal Nature, represent the first time scientists have directly measured superfluid stiffness in a two-dimensional material. To do so, the team developed a new experimental method which can now be used to make similar measurements of other two-dimensional superconducting materials.
“There’s a whole family of 2D superconductors that is waiting to be probed, and we are really just scratching the surface,” says study co-lead author Joel Wang, a research scientist in MIT’s Research Laboratory of Electronics (RLE).
The study’s co-authors from MIT’s main campus and MIT Lincoln Laboratory include co-lead author and former RLE postdoc Miuko Tanaka as well as Thao Dinh, Daniel Rodan-Legrain, Sameia Zaman, Max Hays, Bharath Kannan, Aziza Almanakly, David Kim, Bethany Niedzielski, Kyle Serniak, Mollie Schwartz, Jeffrey Grover, Terry Orlando, Simon Gustavsson, Pablo Jarillo-Herrero, and William D. Oliver, along with Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
Magic resonance
Since its first isolation and characterization in 2004, graphene has proven to be a wonder substance of sorts. The material is effectively a single, atom-thin sheet of graphite consisting of a precise, chicken-wire lattice of carbon atoms. This simple configuration can exhibit a host of superlative qualities in terms of graphene’s strength, durability, and ability to conduct electricity and heat.
In 2018, Jarillo-Herrero and colleagues discovered that when two graphene sheets are stacked on top of each other, at a precise “magic” angle, the twisted structure — now known as magic-angle twisted bilayer graphene, or MATBG — exhibits entirely new properties, including superconductivity, in which electrons pair up, rather than repelling each other as they do in everyday materials. These so-called Cooper pairs can form a superfluid, with the potential to superconduct, meaning they could move through a material as an effortless, friction-free current.
“But even though Cooper pairs have no resistance, you have to apply some push, in the form of an electric field, to get the current to move,” Wang explains. “Superfluid stiffness refers to how easy it is to get these particles to move, in order to drive superconductivity.”
Today, scientists can measure superfluid stiffness in superconducting materials through methods that generally involve placing a material in a microwave resonator — a device which has a characteristic resonance frequency at which an electrical signal will oscillate, at microwave frequencies, much like a vibrating violin string. If a superconducting material is placed within a microwave resonator, it can change the device’s resonance frequency, and in particular, its “kinetic inductance,” by an amount that scientists can directly relate to the material’s superfluid stiffness.
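As a rough illustration of how a frequency shift translates into kinetic inductance, consider a simple lumped-element LC picture. This is a sketch under simplifying assumptions; the function name and the numbers below are illustrative, and the analysis in the actual study is more involved:

```python
import math

def kinetic_inductance_from_shift(f0, f_shifted, L_geo):
    """Infer the added (kinetic) inductance from a resonance-frequency shift,
    assuming a simple lumped-element LC resonator: f = 1 / (2*pi*sqrt(L*C))."""
    C = 1.0 / ((2 * math.pi * f0) ** 2 * L_geo)           # capacitance from the bare resonance
    L_total = 1.0 / ((2 * math.pi * f_shifted) ** 2 * C)  # total inductance after the shift
    return L_total - L_geo                                 # the extra, kinetic part

# Illustrative numbers only: a 5 GHz resonator that shifts down by 10 MHz.
L_k = kinetic_inductance_from_shift(5.00e9, 4.99e9, 2.0e-9)
print(f"Kinetic inductance ~ {L_k * 1e12:.1f} pH")
# Superfluid stiffness scales inversely with kinetic inductance (up to geometric
# factors), so a smaller L_k corresponds to a stiffer superfluid.
```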
However, to date, such approaches have only been compatible with large, thick material samples. The MIT team realized that to measure superfluid stiffness in atomically thin materials like MATBG would require a new approach.
“Compared to MATBG, the typical superconductor that is probed using resonators is 10 to 100 times thicker and larger in area,” Wang says. “We weren’t sure if such a tiny material would generate any measurable inductance at all.”
A captured signal
The challenge to measuring superfluid stiffness in MATBG has to do with attaching the supremely delicate material to the surface of the microwave resonator as seamlessly as possible.
“To make this work, you want to make an ideally lossless — i.e., superconducting — contact between the two materials,” Wang explains. “Otherwise, the microwave signal you send in will be degraded or even just bounce back instead of going into your target material.”
Will Oliver’s group at MIT has been developing techniques to precisely connect extremely delicate, two-dimensional materials, with the goal of building new types of quantum bits for future quantum-computing devices. For their new study, Tanaka, Wang, and their colleagues applied these techniques to seamlessly connect a tiny sample of MATBG to the end of an aluminum microwave resonator. To do so, the group first used conventional methods to assemble MATBG, then sandwiched the structure between two insulating layers of hexagonal boron nitride, to help maintain MATBG’s atomic structure and properties.
“Aluminum is a material we use regularly in our superconducting quantum computing research, for example, aluminum resonators to read out aluminum quantum bits (qubits),” Oliver explains. “So, we thought, why not make most of the resonator from aluminum, which is relatively straightforward for us, and then add a little MATBG to the end of it? It turned out to be a good idea.”
“To contact the MATBG, we etch it very sharply, like cutting through layers of a cake with a very sharp knife,” Wang says. “We expose a side of the freshly-cut MATBG, onto which we then deposit aluminum — the same material as the resonator — to make a good contact and form an aluminum lead.”
The researchers then connected the aluminum leads of the MATBG structure to the larger aluminum microwave resonator. They sent a microwave signal through the resonator and measured the resulting shift in its resonance frequency, from which they could infer the kinetic inductance of the MATBG.
When they converted the measured inductance to a value of superfluid stiffness, however, the researchers found that it was much larger than what conventional theories of superconductivity would have predicted. They had a hunch that the surplus had to do with MATBG’s quantum geometry — the way the quantum states of electrons correlate to one another.
“We saw a tenfold increase in superfluid stiffness compared to conventional expectations, with a temperature dependence consistent with what the theory of quantum geometry predicts,” Tanaka says. “This was a ‘smoking gun’ that pointed to the role of quantum geometry in governing superfluid stiffness in this two-dimensional material.”
“This work represents a great example of how one can use sophisticated quantum technology currently used in quantum circuits to investigate condensed matter systems consisting of strongly interacting particles,” adds Jarillo-Herrero.
This research was funded, in part, by the U.S. Army Research Office, the National Science Foundation, the U.S. Air Force Office of Scientific Research, and the U.S. Under Secretary of Defense for Research and Engineering.
A complementary study on magic-angle twisted trilayer graphene (MATTG), conducted by a collaboration between Philip Kim’s group at Harvard University and Jarillo-Herrero’s group at MIT, appears in the same issue of Nature.
Timeless virtues, new technologies
As the story goes, the Scottish inventor James Watt envisioned how steam engines should work on one day in 1765, when he was walking across Glasgow Green, a park in his hometown. Watt realized that putting a separate condenser in an engine would allow its main cylinder to remain hot, making the engine more efficient and compact than the huge steam engines then in existence.
And yet Watt, who had been pondering the problem for a while, needed a partnership with entrepreneur Matthew Boulton to get a practical product to market, starting in 1775 and becoming successful in later years.
“People still use this story of Watt’s ‘Eureka!’ moment, which Watt himself promoted later in his life,” says MIT Professor David Mindell, an engineer and historian of science and engineering. “But it took 20 years of hard labor, during which Watt struggled to support a family and had multiple failures, to get it out in the world. Multiple other inventions were required to achieve what we today call product-market fit.”
The full story of the steam engine, Mindell argues, is a classic case of what is today called “process innovation,” not just “product innovation.” Inventions are rarely fully-formed products, ready to change the world. Mostly, they need a constellation of improvements, and sustained persuasion, to become adopted into industrial systems.
What was true for Watt still holds, as Mindell’s body of work shows. Most technology-driven growth today comes from overlapping advances, when inventors and companies tweak and improve things over time. Now, Mindell explores those ideas in a forthcoming book, “The New Lunar Society: An Enlightenment Guide to the Next Industrial Revolution,” being published on Feb. 24 by the MIT Press. Mindell is professor of aeronautics and astronautics and the Dibner Professor of the History of Engineering and Manufacturing at MIT, where he has also co-founded the Work of the Future initiative.
“We’ve overemphasized product innovation, although we’re very good at it,” Mindell says. “But it’s become apparent that process innovation is just as important: how you improve the making, fixing, rebuilding, or upgrading of systems. These are deeply entangled. Manufacturing is part of process innovation.”
Today, with so many things being positioned as world-changing products, it may be especially important to notice that being adaptive and persistent is practically the essence of improvement.
“Young innovators don’t always realize that when their invention doesn’t work at first, they’re at the start of a process where they have to refine and engage, and find the right partners to grow,” Mindell says.
Manufacturing at home
The title of Mindell’s book refers to British Enlightenment thinkers and inventors — Watt was one of them — who used to meet in a group they called the Lunar Society, centered in Birmingham. This included pottery innovator Josiah Wedgwood; physician Erasmus Darwin; chemist Joseph Priestley; and Boulton, a metal manufacturer whose work and capital helped make Watt’s improved steam engine a reliable product. The book moves between chapters on the old Lunar Society and those on contemporary industrial systems, drawing parallels between then and now.
“The stories about the Lunar Society are models for the way people can go about their careers, engineering or otherwise, in a way they may not see in popular press about technology today,” Mindell says. “Everyone told Wedgwood he couldn’t compete with Chinese porcelain, yet he learned from the Lunar Society and built an English pottery industry that led the world.”
Applying the Lunar Society’s virtues to contemporary industry leads Mindell to a core set of ideas about technology. Research shows that design and manufacturing should be adjacent if possible, not outsourced globally, to accelerate learning and collaboration. The book also argues that technology should address human needs and that venture capital should focus more on industrial systems than it does. (Mindell has co-founded a firm, called Unless, that invests in companies by using venture financing structures better-suited to industrial transformation.)
In seeing a new industrialism taking shape, Mindell suggests that its future includes new ways of working, collaborating, and valuing knowledge throughout organizations, as well as more AI-based open-source tools for small and mid-size manufacturers. He also contends that a new industrialism should include greater emphasis on maintenance and repair work, which are valuable sources of knowledge about industrial devices and systems.
“We’ve undervalued how to keep things running, while simultaneously hollowing out the middle of the workforce,” he says. “And yet, operations and maintenance are sites of product innovation. Ask the person who fixes your car or dishwasher. They’ll tell you the strengths and weaknesses of every model.”
All told, “The sum total of this work, over time, amounts to a new industrialism if it elevates its cultural status into a movement that values the material basis of our lives and seeks to improve it, literally from the ground up,” Mindell writes in the book.
“The book doesn’t predict the future,” he says. “But rather it suggests how to talk about the future of industry with optimism and realism, as opposed to saying, this is the utopian future where machines do everything, and people just sit back in chairs with wires coming out of their heads.”
Work of the Future
“The New Lunar Society” is a concise book with expansive ideas. Mindell also devotes chapters to the convergence of the Industrial-era Enlightenment, the founding of the U.S., and the crucial role of industry in forming the republic.
“The only founding father who signed all of the critical documents in the founding of the country, Benjamin Franklin, was also the person who crystallized the modern science of electricity and deployed its first practical invention, the lightning rod,” Mindell says. “But there were multiple figures, including Thomas Jefferson and Paul Revere, who integrated the industrial Enlightenment with democracy. Industry has been core to American democracy from the beginning.”
Indeed, as Mindell emphasizes in the book, “industry,” beyond evoking smokestacks, has a human meaning: If you are hard-working, you are displaying industry. That meshes with the idea of persistently redeveloping an invention over time.
Despite the high regard Mindell holds for the Industrial Enlightenment, he recognizes that the era’s industrialization brought harsh working conditions, as well as environmental degradation. As one of the co-founders of MIT’s Work of the Future initiative, he argues that 21st-century industrialism needs to rethink some of its fundamentals.
“The ideals of [British] industrialization missed on the environment, and missed on labor,” Mindell says. “So at this point, how do we rethink industrial systems to do better?” Mindell argues that industry must power an economy that grows while decarbonizing.
After all, Mindell adds, “About 70 percent of greenhouse gas emissions are from industrial sectors, and all of the potential solutions involve making lots of new stuff. Even if it’s just connectors and wire. We’re not going to decarbonize or address global supply chain crises by deindustrializing, we’re going to get there by reindustrializing.”
“The New Lunar Society” has received praise from technologists and other scholars. Joel Mokyr, an economic historian at Northwestern University who coined the term “Industrial Enlightenment,” has stated that Mindell “realizes that innovation requires a combination of knowing and making, mind and hand. … He has written a deeply original and insightful book.” Jeff Wilke SM ’93, a former CEO of Amazon’s consumer business, has said the book “argues compellingly that a thriving industrial base, adept at both product and process innovation, underpins a strong democracy.”
Mindell hopes the audience for the book will range from younger technologists to a general audience of anyone interested in the industrial future.
“I think about young people in industrial settings and want to help them see they’re part of a great tradition and are doing important things to change the world,” Mindell says. “There is a huge audience of people who are interested in technology but find overhyped language does not match their aspirations or personal experience. I’m trying to crystallize this new industrialism as a way of imagining and talking about the future.”
Driving innovation, from Silicon Valley to Detroit
Across a career’s worth of pioneering product designs, Doug Field’s work has shaped the experience of anyone who’s ever used a MacBook Air, ridden a Segway, or driven a Tesla Model 3.
But his newest project is his most ambitious yet: reinventing the Ford automobile, one of the past century’s most iconic pieces of technology.
As Ford’s chief electric vehicle (EV), digital, and design officer, Field is tasked with leading the development of the company’s electric vehicles, while making new software platforms central to all Ford models.
To bring Ford Motor Co. into that digital and electric future, Field effectively has to lead a fast-moving startup inside the legacy carmaker. “It is incredibly hard, figuring out how to do ‘startups’ within large organizations,” he concedes.
If anyone can pull it off, it’s likely to be Field. Ever since his time in MIT’s Leaders for Global Operations (then known as “Leaders in Manufacturing”) program studying organizational behavior and strategy, Field has been fixated on creating the conditions that foster innovation.
“The natural state of an organization is to make it harder and harder to do those things: to innovate, to have small teams, to go against the grain,” he says. To overcome those forces, Field has become a master practitioner of the art of curating diverse, talented teams and helping them flourish inside of big, complex companies.
“It’s one thing to make a creative environment where you can come up with big ideas,” he says. “It’s another to create an execution-focused environment to crank things out. I became intrigued with, and have been for the rest of my career, this question of how can you have both work together?”
Three decades after his first stint as a development engineer at Ford Motor Co., Field now has a chance to marry the manufacturing muscle of Ford with the bold approach that helped him rethink Apple’s laptops and craft Tesla’s Model 3 sedan. His task is nothing less than rethinking how cars are made and operated, from the bottom up.
“If it’s only creative or execution, you’re not going to change the world,” he says. “If you want to have a huge impact, you need people to change the course you’re on, and you need people to build it.”
A passion for design
From a young age, Field had a fascination with automobiles. “I was definitely into cars and transportation more generally,” he says. “I thought of cars as the place where technology and art and human design came together — cars were where all my interests intersected.”
With a mother who was an artist and musician and a father who was an engineer, Field credits his parents’ influence for his lifelong interest in both the aesthetic and technical elements of product design. “I think that’s why I’m drawn to autos — there’s very much an aesthetic aspect to the product,” he says.
After earning a degree in mechanical engineering from Purdue University, Field took a job at Ford in 1987. The big Detroit automakers of that era excelled at mass-producing cars, but weren’t necessarily set up to encourage or reward innovative thinking. Field chafed at the “overstructured and bureaucratic” operational culture he encountered.
The experience was frustrating at times, but also valuable and clarifying. He realized that he “wanted to work with fast-moving, technology-based businesses.”
“My interest in advancing technical problem-solving didn’t have a place in the auto industry” at the time, he says. “I knew I wanted to work with passionate people and create something that didn’t exist, in an environment where talent and innovation were prized, where irreverence was an asset and not a liability. When I read about Silicon Valley, I loved the way they talked about things.”
During that time, Field took two years off to enroll in MIT’s LGO program, where he deepened his technical skills and encountered ideas about manufacturing processes and team-driven innovation that would serve him well in the years ahead.
“Some of the core skill sets that I developed there were really, really important,” he says, “in the context of production lines and production processes.” He studied systems engineering and the use of Monte Carlo simulations to model complex manufacturing environments. During his internship with aerospace manufacturer Pratt & Whitney, he worked on automated design in computer-aided design (CAD) systems, long before those techniques became standard practice.
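Monte Carlo modeling of the kind Field describes can be sketched in a few lines. The example below is a generic illustration, with made-up cycle times and a hypothetical target, rather than any model he actually built:

```python
import random

def simulate_line(n_runs=10_000):
    """Monte Carlo sketch of a three-station production line.

    Each station's cycle time is drawn from a simple distribution
    (illustrative numbers only). The slowest station sets the pace,
    so variability at any one station can push the line past its target."""
    misses = 0
    for _ in range(n_runs):
        stations = [random.gauss(60, 5), random.gauss(55, 8), random.gauss(58, 3)]
        if max(stations) > 70:   # e.g., a 70-second takt-time target
            misses += 1
    return misses / n_runs

print(f"Estimated share of cycles that miss the target: {simulate_line():.1%}")
```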
Another powerful tool he picked up was the science of probability and statistics, under the tutelage of MIT Professor Alvin Drake in his legendary course 6.041/6.431 (Probabilistic Systems Analysis). Field would go on to apply those insights not only to production processes, but also to characterizing variability in people’s aptitudes, working styles, and talents, in the service of building better, more innovative teams. And studying organizational strategy catalyzed his career-long interest in “ways to look at innovation as an outcome, rather than a random spark of genius.”
“So many things I was lucky to be exposed to at MIT,” Field says, were “all building blocks, pieces of the puzzle, that helped me navigate through difficult situations later on.”
Learning while leading
After leaving Ford in 1993, Field worked at Johnson and Johnson Medical for three years in process development. There, he met Segway inventor Dean Kamen, who was working on a project called the iBOT, a gyroscopic powered wheelchair that could climb stairs.
When Kamen spun off Segway to develop a new personal mobility device using the same technology, Field became his first hire. He spent nearly a decade as the firm’s chief technology officer.
At Segway, Field’s interests in vehicles, technology, innovation, process, and human-centered design all came together.
“When I think about working now on electric cars, it was a real gift,” he says. The problems they tackled prefigured the ones he would grapple with later at Tesla and Ford. “Segway was very much a precursor to a modern EV. Completely software controlled, with higher-voltage batteries, redundant systems, traction control, brushless DC motors — it was basically a miniature Tesla in the year 2000.”
At Segway, Field assembled an “amazing” team of engineers and designers who were as passionate as he was about pushing the envelope. “Segway was the first place I was able to hand-pick every single person I worked with, define the culture, and define the mission.”
As he grew into this leadership role, he became equally engrossed with cracking another puzzle: “How do you prize people who don’t fit in?”
“Such a fundamental part of the fabric of Silicon Valley is the love of embracing talent over a traditional organization’s ways of measuring people,” he says. “If you want to innovate, you need to learn how to manage neurodivergence and a very different set of personalities than the people you find in large corporations.”
Field still keeps the base housing of a Segway in his office, as a reminder of what those kinds of teams — along with obsessive attention to detail — can achieve.
Before joining Apple in 2008, he showed that component, with its clean lines and every minuscule part in its place in one unified package, to his prospective new colleagues. “They were like, ‘OK, you’re one of us,’” he recalls.
He soon became vice president of hardware development for all Mac computers, leading the teams behind the MacBook Air and MacBook Pro and eventually overseeing more than 2,000 employees. “Making things really simple and really elegant, thinking about the product as an integrated whole, that really took me into Apple.”
The challenge of giving the MacBook Air its signature sleek and light profile is an example.
“The MacBook Air was the first high-volume consumer electronic product built out of a CNC-machined enclosure,” says Field. He worked with industrial design and technology teams to devise a way to make the laptop from one solid piece of aluminum and jettison two-thirds of the parts found in the iMac. “We had material cut away so that every single screw and piece of electronics sat down into it in an integrated way. That’s how we got the product so small and slim.”
“When I interviewed with Jony Ive” — Apple’s legendary chief design officer — “he said your ability to zoom out and zoom in was the number one most important ability as a leader at Apple.” That meant zooming out to think about “the entire ethos of this product, and the way it will affect the world” and zooming all the way back in to obsess over, say, the physical shape of the laptop itself and what it feels like in a user’s hands.
“That thread of attention to detail, passion for product, design plus technology rolled directly into what I was doing at Tesla,” he says. When Field joined Tesla in 2013, he was drawn to the way the brash startup upended the approach to making cars. “Tesla was integrating digital technology into cars in a way nobody else was. They said, ‘We’re not a car company in Silicon Valley, we’re a Silicon Valley company and we happen to make cars.’”
Field assembled and led the team that produced the Model 3 sedan, Tesla’s most affordable vehicle, designed to have mass-market appeal.
That experience only reinforced the importance, and power, of zooming in and out as a designer — in a way that encompasses the bigger human resources picture.
“You have to have a broad sense of what you’re trying to accomplish and help people in the organization understand what it means to them,” he says. “You have to go across and understand operations enough to glue all of those (things) together — while still being great at and focused on something very, very deeply. That’s T-shaped leadership.”
He credits his time at LGO with providing the foundation for the “T-shaped leadership” he practices.
“An education like the one I got at MIT allowed me to keep moving that ‘T’, to focus really deep, learn a ton, teach as much as I can, and after something gets more mature, pull out and bed down into other areas where the organization needs to grow or where there’s a crisis.”
The power of marrying scale to a “startup mentality”
In 2018, Field returned to Apple as a vice president for special projects. “I left Tesla after Model 3 and Y started to ramp, as there were people better than me to run high-volume manufacturing,” he says. “I went back to Apple hoping what Tesla had learned would motivate Apple to get into a different market.”
That market was his early love: cars. Field quietly led a project to develop an electric vehicle at Apple for three years.
Then Ford CEO Jim Farley came calling. He persuaded Field to return to Ford in late 2021, partly by demonstrating how much things had changed since his first stint at the carmaker.
“Two things came through loud and clear,” Field says. “One was humility. ‘Our success is not assured.’” That attitude was strikingly different from Field’s early experience in Detroit, encountering managers who were resistant to change. “The other thing was urgency. Jim and Bill Ford said the exact same thing to me: ‘We have four or five years to completely remake this company.’”
“I said, ‘OK, if the top of company really believes that, then the auto industry may be ready for what I hope to offer.’”
So far, Field is energized and encouraged by the appetite for reinvention he’s encountered this time around at Ford.
“If you can combine what Ford does really well with what a Tesla or Rivian can do well, this is something to be reckoned with,” says Field. “Skunk works have become one of the fundamental tools of my career,” he says, using an industry term that describes a project pursued by a small, autonomous group of people within a larger organization.
Ford has been developing a new, lower-cost, software-enabled EV platform — running all of the car’s sensors and components from a central digital operating system — with a “skunk works” team for the past two years. The company plans to build new sedans, SUVs, and small pickups based on this new platform.
With other legacy carmakers like Volvo racing into the electric future and fierce competition from EV leaders Tesla and Rivian, Field and his colleagues have their work cut out for them.
If he succeeds, leveraging his decades of learning and leading from LGO to Silicon Valley, then his latest chapter could transform the way we all drive — and secure a spot for Ford at the front of the electric vehicle pack in the process.
“I’ve been lucky to feel over and over that what I’m doing right now — they are going to write a book about it,” says Field. “This is a big deal, for Ford and the U.S. auto industry, and for American industry, actually.”
How telecommunications cables can image the ground beneath us
When people think about fiber optic cables, it’s usually about how they’re used for telecommunications and accessing the internet. But fiber optic cables — strands of glass or plastic that allow for the transmission of light — can be used for another purpose: imaging the ground beneath our feet.
MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) PhD student Hilary Chang recently used the MIT fiber optic cable network to successfully image the ground underneath campus using a method known as distributed acoustic sensing (DAS). By using existing infrastructure, DAS can be an efficient and effective way to understand ground composition, a critical component for assessing the seismic hazard of areas, or how at risk they are from earthquake damage.
“We were able to extract very nice, coherent waves from the surroundings, and then use that to get some information about the subsurface,” says Chang, the lead author of a recent paper describing her work that was co-authored with EAPS Principal Research Scientist Nori Nakata.
Dark fibers
The MIT campus fiber optic system, installed from 2000 to 2003, services internal data transport between labs and buildings as well as external transport, such as the campus internet (MITNet). There are three major cable hubs on campus from which lines branch out into buildings and underground, much like a spiderweb.
The network allocates a certain number of strands per building, some of which are “dark fibers,” or cables that are not actively transporting information. The campus fiber hubs are connected by redundant backbone cables so that, in the event of a failure, network transmission can switch to the dark fibers without loss of network services.
DAS can use existing telecommunication cables and ambient wavefields to extract information about the materials they pass through, making it a valuable tool for places like cities or the ocean floor, where conventional sensors can’t be deployed. Chang, who studies earthquake waveforms and the information we can extract from them, decided to try it out on the MIT campus.
In order to get access to the fiber optic network for the experiment, Chang reached out to John Morgante, a manager of infrastructure project engineering with MIT Information Systems and Technology (IS&T). Morgante has been at MIT since 1998 and was involved with the original project installing the fiber optic network, and was thus able to provide personal insight into selecting a route.
“It was interesting to listen to what they were trying to accomplish with the testing,” says Morgante. While IS&T has worked with students before on various projects involving the school’s network, he said that “in the physical plant area, this is the first that I can remember that we’ve actually collaborated on an experiment together.”
They decided on a path starting from a hub in Building 24, because it was the longest running path that was entirely underground; above-ground wires that cut through buildings wouldn’t work because they weren’t grounded, and thus were useless for the experiment. The path ran from east to west, beginning in Building 24, traveling under a section of Massachusetts Ave., along parts of Amherst and Vassar streets, and ending at Building W92.
“[Morgante] was really helpful,” says Chang, describing it as “a very good experience working with the campus IT team.”
Locating the cables
After renting an interrogator, a device that sends laser pulses to sense ambient vibrations along the fiber optic cables, Chang and a group of volunteers were given special access to connect it to the hub in Building 24. They let it run for five days.
To validate the route and make sure that the interrogator was working, Chang conducted a tap test, in which she hit the ground with a hammer several times to record the precise GPS coordinates of the cable. Conveniently, the underground route is marked by maintenance hole covers that serve as good locations to do the test. And, because she needed the environment to be as quiet as possible to collect clean data, she had to do it around 2 a.m.
“I was hitting it next to a dorm and someone yelled ‘shut up,’ probably because the hammer blows woke them up,” Chang recalls. “I was sorry.” Thankfully, she only had to tap at a few spots and could interpolate the locations for the rest.
During the day, Chang and her fellow students — Denzel Segbefia, Congcong Yuan, and Jared Bryan — performed an additional test with geophones, another instrument that detects seismic waves, out on Briggs Field, where the cable passes underneath, to compare the signals. It was an enjoyable experience for Chang; when the data were collected in 2022, the campus was coming out of pandemic measures, with remote classes sometimes still in place. “It was very nice to have everyone on the field and do something with their hands,” she says.
The noise around us
Once Chang collected the data, she was able to see plenty of environmental activity in the waveforms, including the passing of cars, bikes, and even when the train that runs along the northern edge of campus made its nightly passes.
After identifying the noise sources, Chang and Nakata extracted coherent surface waves from the ambient noise and used the wave speeds associated with different frequencies to understand the properties of the ground the cables pass through. Stiffer materials allow waves to travel faster, while softer materials slow them down.
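The core idea can be illustrated with a toy example: cross-correlating the noise recorded at two points along the cable gives a travel time, and dividing the channel spacing by that time gives a wave speed. The sketch below is schematic, using synthetic data and made-up numbers, not the dispersion analysis used in the study:

```python
import numpy as np

def wave_speed_between_channels(trace_a, trace_b, dt, spacing_m):
    """Estimate the speed of a wave traveling between two channels along the
    cable by cross-correlating their recordings (schematic sketch only)."""
    corr = np.correlate(trace_a, trace_b, mode="full")
    lag_samples = corr.argmax() - (len(trace_b) - 1)   # lag of the correlation peak
    travel_time = abs(lag_samples) * dt
    return spacing_m / travel_time if travel_time > 0 else float("inf")

# Synthetic check: the same pulse arrives 0.05 s later at a channel 100 m away.
t = np.arange(0, 2, 0.001)
pulse = np.exp(-((t - 0.50) ** 2) / 0.001)
delayed = np.exp(-((t - 0.55) ** 2) / 0.001)
print(wave_speed_between_channels(pulse, delayed, dt=0.001, spacing_m=100.0))
# ~2000 m/s for this synthetic example
```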
“We found out that the MIT campus is built on soft materials overlaying a relatively hard bedrock,” Chang says, which confirms previously known, albeit lower-resolution, information about the geology of the area that had been collected using seismometers.
Information like this is critical for regions that are susceptible to destructive earthquakes and other seismic hazards, including the Commonwealth of Massachusetts, which has experienced earthquakes as recently as this past week. Areas of Boston and Cambridge that were built on artificial fill during rapid urbanization are especially at risk, because their subsurface structure is more likely to amplify seismic frequencies and damage buildings. This non-intrusive method for site characterization can help ensure that buildings meet code for the correct seismic hazard level.
“Destructive seismic events do happen, and we need to be prepared,” she says.
Mishael Quraishi named 2025 Churchill Scholar
MIT senior Mishael Quraishi has been selected as a 2025-26 Churchill Scholar and will undertake an MPhil in archaeological research at Cambridge University in the U.K. this fall.
Quraishi, who is majoring in materials science and archaeology with a minor in ancient and medieval studies, envisions a future career as a materials scientist, using archaeological methods to understand how ancient techniques can be applied to modern problems.
At the Masic Lab at MIT, Quraishi was responsible for studying Egyptian blue, the world’s oldest synthetic pigment, to uncover ancient methods for mass production. Through this research, she secured an internship at the Metropolitan Museum of Art’s Department of Scientific Research, where she characterized pigments on the Amathus sarcophagus. Last fall, she presented her findings to kick off the International Roundtable on Polychromy at the Getty Museum. Quraishi has continued research in the Masic Lab, and her work on the “Blue Room” of Pompeii was featured on NBC Nightly News.
Outside of research, Quraishi has been active in MIT’s makerspace and art communities. She has created engravings and acrylic pourings in the MIT MakerWorkshop, metal sculptures in the MIT Forge, and colored glass rods in the MIT Metropolis makerspace. Quraishi also plays the piano and harp and has sung with the Harvard Summer Chorus and the Handel and Haydn Society. She currently serves as the president of the Society for Undergraduates in Materials Science (SUMS) and captain of the lightweight women’s rowing team that won MIT’s first Division I national championship title in 2022.
“We are delighted that Mishael will have the opportunity to expand her important and interesting research at Cambridge University,” says Kim Benard, associate dean of distinguished fellowships. “Her combination of scientific inquiry, humanistic approach, and creative spirit make her an ideal representative of MIT.”
The Churchill Scholarship is a highly competitive fellowship that annually offers 16 American students the opportunity to pursue a funded graduate degree in science, mathematics, or engineering at Churchill College within Cambridge University. The scholarship, which was established in 1963, honors former British Prime Minister Winston Churchill’s vision of U.S.-U.K. scientific exchange. Since 2017, two additional Kanders Churchill Scholarships have been awarded each year for studies in science policy.
MIT students interested in learning more about the Churchill Scholarship should contact Benard in MIT Career Advising and Professional Development.
Aligning AI with human values
Senior Audrey Lorvo is researching AI safety, which seeks to ensure increasingly intelligent AI models are reliable and can benefit humanity. The growing field focuses on technical challenges like robustness and AI alignment with human values, as well as societal concerns like transparency and accountability. Practitioners are also concerned with the potential existential risks associated with increasingly powerful AI tools.
“Ensuring AI isn’t misused or acts contrary to our intentions is increasingly important as we approach artificial general intelligence (AGI),” says Lorvo, a computer science, economics, and data science major. AGI describes the potential of artificial intelligence to match or surpass human cognitive capabilities.
An MIT Schwarzman College of Computing Social and Ethical Responsibilities of Computing (SERC) scholar, Lorvo looks closely at how AI might automate AI research and development processes and practices. A member of the Big Data research group, she’s investigating the social and economic implications of AI’s potential to accelerate research on itself, and how to communicate these ideas and potential impacts effectively to general audiences, including legislators and strategic advisors.
Lorvo emphasizes the need to critically assess AI’s rapid advancements and their implications, ensuring organizations have proper frameworks and strategies in place to address risks. “We need to both ensure humans reap AI’s benefits and that we don’t lose control of the technology,” she says. “We need to do all we can to develop it safely.”
Her participation in efforts like the AI Safety Technical Fellowship reflects her investment in understanding the technical aspects of AI safety. The fellowship provides opportunities to review existing research on aligning AI development with considerations of potential human impact. “The fellowship helped me understand AI safety’s technical questions and challenges so I can potentially propose better AI governance strategies,” she says. According to Lorvo, companies on AI’s frontier continue to push boundaries, which means we’ll need to implement effective policies that prioritize human safety without impeding research.
Value from human engagement
When she arrived at MIT, Lorvo knew she wanted to pursue a course of study that would allow her to work at the intersection of science and the humanities. The variety of offerings at the Institute made her choice difficult, however.
“There are so many ways to help advance the quality of life for individuals and communities,” she says, “and MIT offers so many different paths for investigation.”
Beginning with economics — a discipline she enjoys because of its focus on quantifying impact — Lorvo investigated math, political science, and urban planning before choosing Course 6-14.
“Professor Joshua Angrist’s econometrics classes helped me see the value in focusing on economics, while the data science and computer science elements appealed to me because of the growing reach and potential impact of AI,” she says. “We can use these tools to tackle some of the world’s most pressing problems and hopefully overcome serious challenges.”
Lorvo has also pursued concentrations in urban studies and planning and international development.
As she has narrowed her focus, Lorvo has found that she shares an outlook on humanity with other members of the MIT community, such as the MIT AI Alignment group, from which she has learned quite a bit about AI safety. “Students care about their marginal impact,” she says.
Marginal impact, the additional effect of a specific investment of time, money, or effort, is a way to measure how much a contribution adds to what is already being done, rather than focusing on the total impact. This can potentially influence where people choose to devote their resources, an idea that appeals to Lorvo.
“In a world of limited resources, a data-driven approach to solving some of our biggest challenges can benefit from a tailored approach that directs people to where they’re likely to do the most good,” she says. “If you want to maximize your social impact, reflecting on your career choice’s marginal impact can be very valuable.”
Lorvo also values MIT’s focus on educating the whole student and has taken advantage of opportunities to investigate disciplines like philosophy through MIT Concourse, a program that facilitates dialogue between science and the humanities. Concourse hopes participants gain guidance, clarity, and purpose for scientific, technical, and human pursuits.
Student experiences at the Institute
Lorvo invests her time outside the classroom in creating memorable experiences and fostering relationships with her classmates. “I’m fortunate that there’s space to balance my coursework, research, and club commitments with other activities, like weightlifting and off-campus initiatives,” she says. “There are always so many clubs and events available across the Institute.”
These opportunities to expand her worldview have challenged her beliefs and exposed her to new interest areas that have altered her life and career choices for the better. Lorvo, who is fluent in French, English, Spanish, and Portuguese, also applauds MIT for the international experiences it provides for students.
“I’ve interned in Santiago de Chile and Paris with MISTI and helped test a water vapor condensing chamber that we designed in a fall 2023 D-Lab class in collaboration with the Madagascar Polytechnic School and Tatirano NGO [nongovernmental organization],” she says, “and have enjoyed the opportunities to learn about addressing economic inequality through my International Development and D-Lab classes.”
As president of MIT’s Undergraduate Economics Association, Lorvo connects with other students interested in economics while continuing to expand her understanding of the field. She enjoys the relationships she’s building while also participating in the association’s events throughout the year. “Even as a senior, I’ve found new campus communities to explore and appreciate,” she says. “I encourage other students to continue exploring groups and classes that spark their interests throughout their time at MIT.”
After graduation, Lorvo wants to continue investigating AI safety and researching governance strategies that can help ensure AI’s safe and effective deployment.
“Good governance is essential to AI’s successful development and ensuring humanity can benefit from its transformative potential,” she says. “We must continue to monitor AI’s growth and capabilities as the technology continues to evolve.”
Understanding technology’s potential impacts on humanity, doing good, continually improving, and creating spaces where big ideas can see the light of day continue to drive Lorvo. Merging the humanities with the sciences animates much of what she does. “I always hoped to contribute to improving people’s lives, and AI represents humanity’s greatest challenge and opportunity yet,” she says. “I believe the AI safety field can benefit from people with interdisciplinary experiences like the kind I’ve been fortunate to gain, and I encourage anyone passionate about shaping the future to explore it.”
Eleven MIT faculty receive Presidential Early Career Awards
Eleven MIT faculty, including nine from the School of Engineering and two from the School of Science, were awarded the Presidential Early Career Award for Scientists and Engineers (PECASE). Sixteen additional MIT alumni were also honored.
Established in 1996 by President Bill Clinton, the PECASE is awarded to scientists and engineers “who show exceptional potential for leadership early in their research careers.” The latest recipients were announced by the White House on Jan. 14 under President Joe Biden. Fourteen government agencies recommended researchers for the award.
The MIT faculty and alumni honorees are among 400 scientists and engineers recognized for innovation and scientific contributions. Those from the School of Engineering and School of Science who were honored are:
- Tamara Broderick, associate professor in the Department of Electrical Engineering and Computer Science (EECS), was nominated by the Office of Naval Research for her project advancing “Lightweight representations for decentralized learning in data-rich environments.”
- Michael James Carbin SM ’09, PhD ’15, associate professor in the Department of EECS, was nominated by the National Science Foundation (NSF) for his CAREER award, a project that developed techniques to execute programs reliably on approximate and unreliable computation substrates.
- Christina Delimitrou, the KDD Career Development Professor in Communications and Technology and associate professor in the Department of EECS, was nominated by the NSF for her group’s work on redesigning the cloud system stack for new cloud programming frameworks like microservices and serverless computing, as well as on hardware acceleration techniques that make cloud data centers more predictable and resource-efficient.
- Netta Engelhardt, the Biedenharn Career Development Associate Professor of Physics, was nominated by the Department of Energy for her research on the black hole information paradox and its implications for the fundamental quantum structure of space and time.
- Robert Gilliard Jr., the Novartis Associate Professor of Chemistry, was selected based on the results generated from his 2020 National Science Foundation CAREER award, entitled “CAREER: Boracycles with Unusual Bonding as Creative Strategies for Main-Group Functional Materials.”
- Heather Janine Kulik PD ’09, PhD ’09, the Lammot du Pont Professor of Chemical Engineering, was nominated by the NSF for her 2019 proposal entitled “CAREER: Revealing spin-state-dependent reactivity in open-shell single atom catalysts with systematically-improvable computational tools.”
- Nuno Loureiro, professor in the Department of Nuclear Science and Engineering, was nominated by the NSF for his work on the generation and amplification of magnetic fields in the universe.
- Robert Macfarlane, associate professor in the Department of Materials Science and Engineering, was nominated by the Department of Defense (DoD)’s Air Force Office of Scientific Research. His research focuses on making new materials using molecular and nanoscale building blocks.
- Ritu Raman, the Eugene Bell Career Development Professor of Tissue Engineering in the Department of Mechanical Engineering, was nominated by the DoD for her ARO-funded research that explored leveraging biological actuators in next-generation robots that can sense and adapt to their environments.
- Ellen Roche, the Latham Family Career Development Professor and associate department head in the Department of Mechanical Engineering, was nominated by the NSF for her CAREER award, a project that aims to create a cutting-edge benchtop model combining soft robotics and organic tissue to accurately simulate the motions of the heart and diaphragm.
- Justin Wilkerson, a visiting associate professor in the Department of Aeronautics and Astronautics, was nominated by the Air Force Office of Scientific Research (AFOSR) for his research primarily related to the design and optimization of novel multifunctional composite materials that can survive extreme environments.
Additional MIT alumni who were honored include: Elaheh Ahmadi ’20, MNG ’21; Ambika Bajpayee MNG ’07, PhD ’15; Katherine Bouman SM ’13, PhD ’17; Walter Cheng-Wan Lee ’95, MNG ’95, PhD ’05; Ismaila Dabo PhD ’08; Ying Diao SM ’10, PhD ’12; Eno Ebong ’99; Soheil Feizi-Khankandi SM ’10, PhD ’16; Mark Finlayson SM ’01, PhD ’12; Chelsea B. Finn ’14; Grace Xiang Gu SM ’14, PhD ’18; David Michael Isaacson PhD ’06, AF ’16; Lewei Lin ’05; Michelle Sander PhD ’12; Kevin Solomon SM ’08, PhD ’12; and Zhiting Tian PhD ’14.
Introducing the MIT Generative AI Impact Consortium
From crafting complex code to revolutionizing the hiring process, generative artificial intelligence is reshaping industries faster than ever before — pushing the boundaries of creativity, productivity, and collaboration across countless domains.
Enter the MIT Generative AI Impact Consortium, a collaboration between industry leaders and MIT’s top minds. As MIT President Sally Kornbluth highlighted last year, the Institute is poised to address the societal impacts of generative AI through bold collaborations. Building on this momentum and established through MIT’s Generative AI Week and impact papers, the consortium aims to harness AI’s transformative power for societal good, tackling challenges before they shape the future in unintended ways.
“Generative AI and large language models [LLMs] are reshaping everything, with applications stretching across diverse sectors,” says Anantha Chandrakasan, dean of the School of Engineering and MIT’s chief innovation and strategy officer, who leads the consortium. “As we push forward with newer and more efficient models, MIT is committed to guiding their development and impact on the world.”
Chandrakasan adds that the consortium’s vision is rooted in MIT’s core mission. “I am thrilled and honored to help advance one of President Kornbluth’s strategic priorities around artificial intelligence,” he says. “This initiative is uniquely MIT — it thrives on breaking down barriers, bringing together disciplines, and partnering with industry to create real, lasting impact. The collaborations ahead are something we’re truly excited about.”
Developing the blueprint for generative AI’s next leap
The consortium is guided by three pivotal questions, framed by Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and co-chair of the GenAI Dean’s oversight group, that go beyond AI’s technical capabilities and into its potential to transform industries and lives:
- How can AI-human collaboration create outcomes that neither could achieve alone?
- What is the dynamic between AI systems and human behavior, and how do we maximize the benefits while steering clear of risks?
- How can interdisciplinary research guide the development of better, safer AI technologies that improve human life?
Generative AI continues to advance at lightning speed, but its future depends on building a solid foundation. “Everybody recognizes that large language models will transform entire industries, but there's no strong foundation yet around design principles,” says Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-faculty director of the consortium.
“Now is a perfect time to look at the fundamentals — the building blocks that will make generative AI more effective and safer to use,” adds Kraska.
"What excites me is that this consortium isn’t just academic research for the distant future — we’re working on problems where our timelines align with industry needs, driving meaningful progress in real time," says Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management, and co-faculty director of the consortium.
A “perfect match” of academia and industry
At the heart of the Generative AI Impact Consortium are six founding members: Analog Devices, The Coca-Cola Co., OpenAI, Tata Group, SK Telecom, and TWG Global. Together, they will work hand-in-hand with MIT researchers to accelerate breakthroughs and address industry-shaping problems.
The consortium taps into MIT’s expertise, working across schools and disciplines — led by MIT’s Office of Innovation and Strategy, in collaboration with the MIT Schwarzman College of Computing and all five of MIT’s schools.
“This initiative is the ideal bridge between academia and industry,” says Chandrakasan. “With companies spanning diverse sectors, the consortium brings together real-world challenges, data, and expertise. MIT researchers will dive into these problems to develop cutting-edge models and applications for these different domains.”
Industry partners: Collaborating on AI’s evolution
At the core of the consortium’s mission is collaboration — bringing MIT researchers and industry partners together to unlock generative AI’s potential while ensuring its benefits are felt across society.
Among the founding members is OpenAI, the creator of the generative AI chatbot ChatGPT.
“This type of collaboration between academics, practitioners, and labs is key to ensuring that generative AI evolves in ways that meaningfully benefit society,” says Anna Makanju, vice president of global impact at OpenAI, adding that OpenAI “is eager to work alongside MIT’s Generative AI Consortium to bridge the gap between cutting-edge AI research and the real-world expertise of diverse industries.”
The Coca-Cola Co. recognizes an opportunity to leverage AI innovation on a global scale. “We see a tremendous opportunity to innovate at the speed of AI and, leveraging The Coca-Cola Company's global footprint, make these cutting-edge solutions accessible to everyone,” says Pratik Thakar, global vice president and head of generative AI. “Both MIT and The Coca-Cola Company are deeply committed to innovation, while also placing equal emphasis on the legally and ethically responsible development and use of technology.”
For TWG Global, the consortium offers the ideal environment to share knowledge and drive advancements. “The strength of the consortium is its unique combination of industry leaders and academia, which fosters the exchange of valuable lessons, technological advancements, and access to pioneering research,” says Drew Cukor, head of data and artificial intelligence transformation. Cukor adds that TWG Global “is keen to share its insights and actively engage with leading executives and academics to gain a broader perspective of how others are configuring and adopting AI, which is why we believe in the work of the consortium.”
The Tata Group views the collaboration as a platform to address some of AI’s most pressing challenges. “The consortium enables Tata to collaborate, share knowledge, and collectively shape the future of generative AI, particularly in addressing urgent challenges such as ethical considerations, data privacy, and algorithmic biases,” says Aparna Ganesh, vice president of Tata Sons Ltd.
Similarly, SK Telecom sees its involvement as a launchpad for growth and innovation. “Joining the consortium presents a significant opportunity for SK Telecom to enhance its AI competitiveness in core business areas, including AI agents, AI semiconductors, data centers (AIDC), and physical AI,” says Suk-geun (SG) Chung, SK Telecom executive vice president and chief AI global officer. “By collaborating with MIT and leveraging the SK AI R&D Center as a technology control tower, we aim to forecast next-generation generative AI technology trends, propose innovative business models, and drive commercialization through academic-industrial collaboration.”
Alan Lee, chief technology officer of Analog Devices (ADI), highlights how the consortium bridges key knowledge gaps for both his company and the industry at large. “ADI can’t hire a world-leading expert in every single corner case, but the consortium will enable us to access top MIT researchers and get them involved in addressing problems we care about, as we also work together with others in the industry towards common goals,” he says.
The consortium will host interactive workshops and discussions to identify and prioritize challenges. “It’s going to be a two-way conversation, with the faculty coming together with industry partners, but also industry partners talking with each other,” says Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research and statistics, who serves alongside Huttenlocher as co-chair of the GenAI Dean’s oversight group.
Preparing for the AI-enabled workforce of the future
With AI poised to disrupt industries and create new opportunities, one of the consortium’s core goals is to guide that change in a way that benefits both businesses and society.
“When the first commercial digital computers were introduced [the UNIVAC was delivered to the U.S. Census Bureau in 1951], people were worried about losing their jobs,” says Kraska. “And yes, jobs like large-scale, manual data entry clerks and human ‘computers,’ people tasked with doing manual calculations, largely disappeared over time. But the people impacted by those first computers were trained to do other jobs.”
The consortium aims to play a key role in preparing the workforce of tomorrow by educating global business leaders and employees on generative AI’s evolving uses and applications. With the pace of innovation accelerating, leaders face a flood of information and uncertainty.
“When it comes to educating leaders about generative AI, it’s about helping them navigate the complexity of the space right now, because there’s so much hype and hundreds of papers published daily,” says Kraska. “The hard part is understanding which developments could actually have a chance of changing the field and which are just tiny improvements. There's a kind of FOMO [fear of missing out] for leaders that we can help reduce.”
Defining success: Shared goals for generative AI impact
Success within the initiative is defined by shared progress, open innovation, and mutual growth. “Consortium participants recognize, I think, that when I share my ideas with you, and you share your ideas with me, we’re both fundamentally better off,” explains Farias. “Progress on generative AI is not zero-sum, so it makes sense for this to be an open-source initiative.”
While participants may approach success from different angles, they share a common goal of advancing generative AI for broad societal benefit. “There will be many success metrics,” says Perakis. “We’ll educate students, who will be networking with companies. Companies will come together and learn from each other. Business leaders will come to MIT and have discussions that will help all of us, not just the leaders themselves.”
For Analog Devices’ Alan Lee, success is measured in tangible improvements that drive efficiency and product innovation: “For us at ADI, it’s a better, faster quality of experience for our customers, and that could mean better products. It could mean faster design cycles, faster verification cycles, and faster tuning of equipment that we already have or that we’re going to develop for the future. But beyond that, we want to help the world be a better, more efficient place.”
Ganesh highlights success through the lens of real-world application. “Success will also be defined by accelerating AI adoption within Tata companies, generating actionable knowledge that can be applied in real-world scenarios, and delivering significant advantages to our customers and stakeholders,” she says.
Generative AI is no longer confined to isolated research labs — it’s driving innovation across industries and disciplines. At MIT, the technology has become a campus-wide priority, connecting researchers, students, and industry leaders to solve complex challenges and uncover new opportunities. “It's truly an MIT initiative,” says Farias, “one that’s much larger than any individual or department on campus.”
David Darmofal SM ’91, PhD ’93 named vice chancellor for undergraduate and graduate education
David L. Darmofal SM ’91, PhD ’93 will serve as MIT’s next vice chancellor for undergraduate and graduate education, effective Feb. 17. Chancellor Melissa Nobles announced Darmofal’s appointment today in a letter to the MIT community.
Darmofal succeeds Ian A. Waitz, who stepped down in May to become MIT’s vice president for research, and Daniel E. Hastings, who has been serving in an interim capacity.
A creative innovator in research-based teaching and learning, Darmofal is the Jerome C. Hunsaker Professor of Aeronautics and Astronautics. Since 2017, he and his wife Claudia have served as heads of house at The Warehouse, an MIT graduate residence.
“Dave knows the ins and outs of education and student life at MIT in a way that few do,” Nobles says. “He’s a head of house, an alum, and the parent of a graduate. Dave will bring decades of first-hand experience to the role.”
“An MIT education is incredibly special, combining passionate students, staff, and faculty striving to use knowledge and discovery to drive positive change for the world,” says Darmofal. “I am grateful for this opportunity to play a part in supporting MIT’s academic mission.”
Darmofal’s leadership experience includes service from 2008 to 2011 as associate and interim department head in the Department of Aeronautics and Astronautics, overseeing undergraduate and graduate programs. He was the AeroAstro director of digital education from 2020 to 2022, including leading the department’s response to remote learning during the Covid-19 pandemic. He currently serves as director of the MIT Aerospace Computational Science and Engineering Laboratory and is a member of the Center for Computational Science and Engineering (CCSE) in the MIT Stephen A. Schwarzman College of Computing.
As an MIT faculty member and administrator, Darmofal has been involved in designing more flexible degree programs, developing open digital-learning opportunities, creating first-year advising seminars, and enhancing professional and personal development opportunities for students. He also contributed his expertise in engineering pedagogy to the development of the Schwarzman College of Computing’s Common Ground efforts, to address the need for computing education across many disciplines.
“MIT students, staff, and faculty share a common bond as problem solvers. Talk to any of us about an MIT education, and you will get an earful on not only what we need to do better, but also how we can actually do it. The Office of the Vice Chancellor can help bring our community of problem solvers together to enable improvements in our academics,” says Darmofal.
Darmofal will oversee the academic arm of the Chancellor’s Office, and the vice chancellor’s portfolio is extensive. He will lead professionals across more than a dozen units, covering areas such as recruitment and admissions, financial aid, student systems, advising, professional and career development, pedagogy, experiential learning, and support for MIT’s more than 100 graduate programs. He will also work collaboratively with many of MIT’s student organizations and groups, including the leaders of the Undergraduate Association and the Graduate Student Council, and administer the relationship with the graduate student union.
“Dave will be a critical part of my office’s efforts to strengthen and expand critical connections across all areas of student life and learning,” Nobles says. She credits the search advisory group, co-chaired by professors Laurie Boyer and Will Tisdale, with setting the right tenor for such an important role and leading a thorough, inclusive process.
Darmofal’s research is focused on computational methods for partial differential equations, especially fluid dynamics. He earned his SM and PhD degrees in aeronautics and astronautics in 1991 and 1993, respectively, from MIT, and his BS in aerospace engineering in 1989 from the University of Michigan. Prior to joining MIT in 1998, he was an assistant professor in the Department of Aerospace Engineering at Texas A&M University from 1995 to 1998. Currently, he is the chair of AeroAstro’s Undergraduate Committee and the graduate officer for the CCSE PhD program.
“I want to echo something that Dan Hastings said recently,” Darmofal says. “We have a lot to be proud of when it comes to an MIT education. It’s more accessible than it has ever been. It’s innovative, with unmatched learning opportunities here and around the world. It’s home to academic research labs that attract the most talented scholars, creators, experimenters, and engineers. And ultimately, it prepares graduates who do good.”
User-friendly system can help developers build more efficient simulations and AI models
The neural network artificial intelligence models used in applications like medical image processing and speech recognition perform operations on hugely complex data structures that require an enormous amount of computation to process. This is one reason deep-learning models consume so much energy.
To improve the efficiency of AI models, MIT researchers created an automated system that enables developers of deep learning algorithms to simultaneously take advantage of two types of data redundancy. This reduces the amount of computation, bandwidth, and memory storage needed for machine learning operations.
Existing techniques for optimizing algorithms can be cumbersome and typically only allow developers to capitalize on either sparsity or symmetry — two different types of redundancy that exist in deep learning data structures.
By enabling a developer to build an algorithm from scratch that takes advantage of both redundancies at once, the MIT researchers’ approach boosted the speed of computations by nearly 30 times in some experiments.
Because the system utilizes a user-friendly programming language, it could optimize machine-learning algorithms for a wide range of applications. The system could also help scientists who are not experts in deep learning but want to improve the efficiency of AI algorithms they use to process data. In addition, the system could have applications in scientific computing.
“For a long time, capturing these data redundancies has required a lot of implementation effort. Instead, a scientist can tell our system what they would like to compute in a more abstract way, without telling the system exactly how to compute it,” says Willow Ahrens, an MIT postdoc and co-author of a paper on the system, which will be presented at the International Symposium on Code Generation and Optimization.
She is joined on the paper by lead author Radha Patel ’23, SM ’24 and senior author Saman Amarasinghe, a professor in the Department of Electrical Engineering and Computer Science (EECS) and a principal researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Cutting out computation
In machine learning, data are often represented and manipulated as multidimensional arrays known as tensors. A tensor is like a matrix, which is a rectangular array of values arranged on two axes, rows and columns. But unlike a two-dimensional matrix, a tensor can have many dimensions, or axes, making tensors more difficult to manipulate.
Deep-learning models perform operations on tensors using repeated matrix multiplication and addition — this process is how neural networks learn complex patterns in data. The sheer volume of calculations that must be performed on these multidimensional data structures requires an enormous amount of computation and energy.
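As a rough illustration of the kind of operation involved (the shapes and names below are assumptions for illustration, not taken from the paper), a single neural-network layer boils down to exactly this pattern of matrix multiplication and addition:

```python
import numpy as np

# A toy "layer": a batch of inputs multiplied by a weight matrix, plus a bias.
rng = np.random.default_rng(0)
inputs = rng.random((32, 128))     # 2-D tensor: 32 examples, 128 features each
weights = rng.random((128, 64))    # learned parameters (random here)
bias = rng.random(64)

outputs = inputs @ weights + bias  # repeated multiply-and-add over the tensor
print(outputs.shape)               # (32, 64)
```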
But because of the way data in tensors are arranged, engineers can often boost the speed of a neural network by cutting out redundant computations.
For instance, if a tensor represents user review data from an e-commerce site, since not every user reviewed every product, most values in that tensor are likely zero. This type of data redundancy is called sparsity. A model can save time and computation by only storing and operating on non-zero values.
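A minimal sketch of the sparsity idea, using a made-up ratings matrix rather than anything from the paper, shows how a sparse format stores and multiplies only the non-zero entries:

```python
import numpy as np
from scipy import sparse

# Hypothetical user-by-product ratings matrix where roughly 99% of entries are zero.
rng = np.random.default_rng(0)
dense = rng.random((2000, 2000))
dense[dense < 0.99] = 0.0                 # keep only ~1% of the entries as non-zero

vector = rng.random(2000)

dense_result = dense @ vector             # dense path: touches every entry, zeros included

sparse_matrix = sparse.csr_matrix(dense)  # sparse path: only non-zero entries are stored
sparse_result = sparse_matrix @ vector

assert np.allclose(dense_result, sparse_result)
print(f"Non-zeros stored: {sparse_matrix.nnz} of {dense.size} entries")
```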
In addition, sometimes a tensor is symmetric, which means the top half and bottom half of the data structure are equal. In this case, the model only needs to operate on one half, reducing the amount of computation. This type of data redundancy is called symmetry.
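The symmetry idea can be sketched the same way; the matrix below is invented for illustration and the code is not SySTeC’s, but it shows that one triangle of a symmetric matrix is enough to reproduce a full matrix-vector product:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((4, 4))
A = (a + a.T) / 2                  # a symmetric matrix: A[i, j] == A[j, i]
x = rng.random(4)

upper = np.triu(A)                 # roughly half the entries of A
strict_upper = np.triu(A, k=1)     # upper triangle without the diagonal

# y = A @ x, computed without ever materializing the lower triangle.
y = upper @ x + strict_upper.T @ x
assert np.allclose(y, A @ x)
```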
“But when you try to capture both of these optimizations, the situation becomes quite complex,” Ahrens says.
To simplify the process, she and her collaborators built a new compiler, which is a computer program that translates complex code into a simpler language that can be processed by a machine. Their compiler, called SySTeC, can optimize computations by automatically taking advantage of both sparsity and symmetry in tensors.
They began the process of building SySTeC by identifying three key optimizations they can perform using symmetry.
First, if the algorithm’s output tensor is symmetric, then it only needs to compute one half of it. Second, if the input tensor is symmetric, then the algorithm only needs to read one half of it. Finally, if intermediate results of tensor operations are symmetric, the algorithm can skip redundant computations.
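As a sketch of the first of these optimizations (hypothetical code, not SySTeC’s generated output), a product of the form C = A·Aᵀ is symmetric, so only the upper triangle needs to be computed and the rest can be mirrored:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((5, 3))
n = A.shape[0]

C = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):          # compute only entries with j >= i
        C[i, j] = A[i] @ A[j]
        C[j, i] = C[i, j]          # mirror instead of recomputing

assert np.allclose(C, A @ A.T)     # matches the full product
```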
Simultaneous optimizations
To use SySTeC, a developer inputs their program and the system automatically optimizes their code for all three types of symmetry. Then the second phase of SySTeC performs additional transformations to only store non-zero data values, optimizing the program for sparsity.
In the end, SySTeC generates ready-to-use code.
“In this way, we get the benefits of both optimizations. And the interesting thing about symmetry is, as your tensor has more dimensions, you can get even more savings on computation,” Ahrens says.
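A toy illustration of what combining the two ideas can look like, with a made-up symmetric sparse matrix stored only by its non-zero upper-triangle entries (an assumption about the general approach, not SySTeC’s actual output):

```python
import numpy as np

# Non-zero entries of a symmetric 4x4 matrix, upper triangle only.
nonzeros = {(0, 0): 2.0, (0, 2): 1.5, (1, 3): -0.5, (2, 2): 3.0}
n = 4
x = np.array([1.0, 2.0, 3.0, 4.0])

y = np.zeros(n)
for (i, j), value in nonzeros.items():   # visit only stored (non-zero) entries
    y[i] += value * x[j]
    if i != j:                           # exploit symmetry: A[j, i] == A[i, j]
        y[j] += value * x[i]

# Check against the fully materialized dense matrix.
A = np.zeros((n, n))
for (i, j), value in nonzeros.items():
    A[i, j] = A[j, i] = value
assert np.allclose(y, A @ x)
```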
The researchers demonstrated speedups of nearly a factor of 30 with code generated automatically by SySTeC.
Because the system is automated, it could be especially useful in situations where a scientist wants to process data using an algorithm they are writing from scratch.
In the future, the researchers want to integrate SySTeC into existing sparse tensor compiler systems to create a seamless interface for users. In addition, they would like to use it to optimize code for more complicated programs.
This work is funded, in part, by Intel, the National Science Foundation, the Defense Advanced Research Projects Agency, and the Department of Energy.