MIT Latest News

Like human brains, large language models reason about diverse data in a general way
While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio.
MIT researchers probed the inner workings of LLMs to better understand how they process such assorted data, and found evidence that they share some similarities with the human brain.
Neuroscientists believe the human brain has a “semantic hub” in the anterior temporal lobe that integrates semantic information from various modalities, like visual data and tactile inputs. This semantic hub is connected to modality-specific “spokes” that route information to the hub. The MIT researchers found that LLMs use a similar mechanism by abstractly processing data from diverse modalities in a central, generalized way. For instance, a model that has English as its dominant language would rely on English as a central medium to process inputs in Japanese or reason about arithmetic, computer code, etc. Furthermore, the researchers demonstrate that they can intervene in a model’s semantic hub by using text in the model’s dominant language to change its outputs, even when the model is processing data in other languages.
These findings could help scientists train future LLMs that are better able to handle diverse data.
“LLMs are big black boxes. They have achieved very impressive performance, but we have very little knowledge about their internal working mechanisms. I hope this can be an early step to better understand how they work so we can improve upon them and better control them when needed,” says Zhaofeng Wu, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this research.
His co-authors include Xinyan Velocity Yu, a graduate student at the University of Southern California (USC); Dani Yogatama, an associate professor at USC; Jiasen Lu, a research scientist at Apple; and senior author Yoon Kim, an assistant professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Learning Representations.
Integrating diverse data
The researchers based the new study on prior work which hinted that English-centric LLMs use English to perform reasoning processes on inputs in various languages.
Wu and his collaborators expanded this idea, launching an in-depth study into the mechanisms LLMs use to process diverse data.
An LLM, which is composed of many interconnected layers, splits input text into words or sub-words called tokens. The model assigns a representation to each token, which enables it to explore the relationships between tokens and generate the next word in a sequence. In the case of images or audio, these tokens correspond to particular regions of an image or sections of an audio clip.
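The tokenization step can be pictured with a toy sketch. The vocabulary, token IDs, and embedding vectors below are invented for illustration and bear no relation to any real model's tokenizer:

```python
# Toy illustration of tokenization and embedding lookup (not a real LLM tokenizer).
vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}
embeddings = [[0.1, 0.2], [0.3, 0.1], [0.0, 0.5], [0.0, 0.0]]  # one vector per token ID

def tokenize(text):
    """Split text into words and map each to a token ID (unknown words get <unk>)."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

def embed(token_ids):
    """Look up the representation assigned to each token."""
    return [embeddings[i] for i in token_ids]

ids = tokenize("The cat sat")   # [0, 1, 2]
reps = embed(ids)               # one vector per token
```

Real models use learned sub-word vocabularies with tens of thousands of entries, but the lookup-and-represent pattern is the same.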
The researchers found that the model’s initial layers process data in its specific language or modality, like the modality-specific spokes in the human brain. Then, the LLM converts tokens into modality-agnostic representations as it reasons about them throughout its internal layers, akin to how the brain’s semantic hub integrates diverse information.
The model assigns similar representations to inputs with similar meanings, regardless of their data type, including images, audio, computer code, and arithmetic problems. Even though an image and its text caption are distinct data types, the LLM assigns them similar representations because they share the same meaning.
For instance, an English-dominant LLM “thinks” about a Chinese-text input in English before generating an output in Chinese. The model has a similar reasoning tendency for non-text inputs like computer code, math problems, or even multimodal data.
To test this hypothesis, the researchers passed a pair of sentences with the same meaning but written in two different languages through the model. They measured how similar the model’s representations were for each sentence.
Then they conducted a second set of experiments where they fed an English-dominant model text in a different language, like Chinese, and measured how similar its internal representation was to English versus Chinese. The researchers conducted similar experiments for other data types.
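A standard way to quantify this kind of representational similarity is cosine similarity between hidden-state vectors. The sketch below is a generic illustration of that measurement; the three-dimensional vectors are made up to stand in for internal representations, and the paper's exact metric and data are not reproduced here:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical hidden-state vectors: a sentence, its translation, and an unrelated sentence.
rep_english = [0.8, 0.1, 0.3]
rep_chinese = [0.7, 0.2, 0.3]
rep_unrelated = [-0.2, 0.9, -0.5]

sim_translation = cosine_similarity(rep_english, rep_chinese)
sim_unrelated = cosine_similarity(rep_english, rep_unrelated)
assert sim_translation > sim_unrelated  # same meaning -> closer representations
```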
They consistently found that the model's representations were similar for sentences with similar meanings. In addition, across many data types, the tokens the model processed in its internal layers were more similar to English-centric tokens than to tokens of the input data type.
“A lot of these input data types seem extremely different from language, so we were very surprised that we can probe out English tokens when the model processes, for example, mathematical or coding expressions,” Wu says.
Leveraging the semantic hub
The researchers think LLMs may learn this semantic hub strategy during training because it is an economical way to process varied data.
“There are thousands of languages out there, but a lot of the knowledge is shared, like commonsense knowledge or factual knowledge. The model doesn’t need to duplicate that knowledge across languages,” Wu says.
The researchers also tried intervening in the model’s internal layers using English text when it was processing other languages. They found that they could predictably change the model outputs, even though those outputs were in other languages.
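One can picture this kind of intervention with a toy model in which a "hidden state" is nudged along a steering direction before decoding. Everything below, including the French output words and the steering vector, is hypothetical; it only illustrates the general idea of changing a model's output by editing its internal representation:

```python
# Toy sketch of an activation-level intervention (illustrative, not the paper's method).
def decode(hidden, output_vectors):
    """Pick the output word whose vector best matches the hidden state (dot product)."""
    scores = {word: sum(h * v for h, v in zip(hidden, vec))
              for word, vec in output_vectors.items()}
    return max(scores, key=scores.get)

outputs = {"chaud": [1.0, 0.0], "froid": [0.0, 1.0]}  # hypothetical French outputs: hot / cold
hidden = [0.2, 0.6]      # toy hidden state that currently leans toward "froid"
steering = [1.0, -1.0]   # hypothetical direction derived from English "hot" text

before = decode(hidden, outputs)
steered = [h + 0.8 * s for h, s in zip(hidden, steering)]
after = decode(steered, outputs)  # the intervention flips the French output
```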
Scientists could leverage this phenomenon to encourage the model to share as much information as possible across diverse data types, potentially boosting efficiency.
But on the other hand, there could be concepts or knowledge that are not translatable across languages or data types, like culturally specific knowledge. Scientists might want LLMs to have some language-specific processing mechanisms in those cases.
“How do you maximally share whenever possible but also allow languages to have some language-specific processing mechanisms? That could be explored in future work on model architectures,” Wu says.
In addition, researchers could use these insights to improve multilingual models. Often, an English-dominant model that learns to speak another language will lose some of its accuracy in English. A better understanding of an LLM’s semantic hub could help researchers prevent this language interference, he says.
“Understanding how language models process inputs across languages and modalities is a key question in artificial intelligence. This paper makes an interesting connection to neuroscience and shows that the proposed ‘semantic hub hypothesis’ holds in modern language models, where semantically similar representations of different data types are created in the model’s intermediate layers,” says Mor Geva Pipek, an assistant professor in the School of Computer Science at Tel Aviv University, who was not involved with this work. “The hypothesis and experiments nicely tie and extend findings from previous works and could be influential for future research on creating better multimodal models and studying links between them and brain function and cognition in humans.”
This research is funded, in part, by the MIT-IBM Watson AI Lab.
MIT spinout maps the body’s metabolites to uncover the hidden drivers of disease
Biology is never simple. As researchers make strides in reading and editing genes to treat disease, for instance, a growing body of evidence suggests that the proteins and metabolites surrounding those genes can’t be ignored.
The MIT spinout ReviveMed has created a platform for measuring metabolites — products of metabolism like lipids, cholesterol, sugar, and carbs — at scale. The company is using those measurements to uncover why some patients respond to treatments when others don’t and to better understand the drivers of disease.
“Historically, we’ve been able to measure a few hundred metabolites with high accuracy, but that’s a fraction of the metabolites that exist in our bodies,” says ReviveMed CEO Leila Pirhaji PhD ’16, who founded the company with Professor Ernest Fraenkel. “There’s a massive gap between what we’re accurately measuring and what exists in our body, and that’s what we want to tackle. We want to tap into the powerful insights from underutilized metabolite data.”
ReviveMed’s progress comes as the broader medical community is increasingly linking dysregulated metabolites to diseases like cancer, Alzheimer’s, and cardiovascular disease. ReviveMed is using its platform to help some of the largest pharmaceutical companies in the world find patients that stand to benefit from their treatments. It’s also offering software to academic researchers for free to help gain insights from untapped metabolite data.
“With the field of AI booming, we think we can overcome data problems that have limited the study of metabolites,” Pirhaji says. “There’s no foundation model for metabolomics, but we see how these models are changing various fields such as genomics, so we’re starting to pioneer their development.”
Finding a challenge
Pirhaji was born and raised in Iran before coming to MIT in 2010 to pursue her PhD in biological engineering. She had previously read Fraenkel’s research papers and was excited to contribute to the network models he was building, which integrated data from sources like genomes, proteomes, and other molecules.
“We were thinking about the big picture in terms of what you can do when you can measure everything — the genes, the RNA, the proteins, and small molecules like metabolites and lipids,” says Fraenkel, who currently serves on ReviveMed’s board of directors. “We’re probably only able to measure something like 0.1 percent of small molecules in the body. We thought there had to be a way to get as comprehensive a view of those molecules as we have for the other ones. That would allow us to map out all of the changes occurring in the cell, whether it's in the context of cancer or development or degenerative diseases.”
About halfway through her PhD, Pirhaji sent some samples to a collaborator at Harvard University to collect data on the metabolome — the small molecules that are the products of metabolic processes. The collaborator sent Pirhaji back a huge Excel sheet with thousands of lines of data, but told her she would be better off ignoring everything beyond the top 100 rows because no one knew what the other data meant. She took that as a challenge.
“I started thinking maybe we could use our network models to solve this problem,” Pirhaji recalls. “There was a lot of ambiguity in the data, and it was very interesting to me because no one had tried this before. It seemed like a big gap in the field.”
Pirhaji developed a huge knowledge graph that included millions of interactions between proteins and metabolites. The data was rich but messy — Pirhaji called it a “hair ball” that couldn’t tell researchers anything about disease. To make it more useful, she created a new way to characterize metabolic pathways and features. In a 2016 paper in Nature Methods, she described the system and used it to analyze metabolic changes in a model of Huntington’s disease.
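A knowledge graph of this kind can be represented as a simple adjacency structure mapping each molecule to its interaction partners. The sketch below uses invented node names; a real graph like the one described contains millions of interactions:

```python
# Toy protein-metabolite knowledge graph (node names are hypothetical).
from collections import defaultdict

graph = defaultdict(set)

def add_interaction(a, b):
    """Record an undirected interaction between two molecules."""
    graph[a].add(b)
    graph[b].add(a)

add_interaction("protein_A", "metabolite_X")
add_interaction("protein_A", "metabolite_Y")
add_interaction("protein_B", "metabolite_X")

def neighbors(node):
    """All molecules known to interact with the given node."""
    return sorted(graph[node])
```

Network models like the one in the 2016 paper layer inference on top of such a graph, for example to pick out the subnetwork of interactions most consistent with measured metabolic changes.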
Initially, Pirhaji had no intention of starting a company, but she started realizing the technology’s commercial potential in the final years of her PhD.
“There’s no entrepreneurial culture in Iran,” Pirhaji says. “I didn’t know how to start a company or turn science into a startup, so I leveraged everything MIT offered.”
Pirhaji began taking classes at the MIT Sloan School of Management, including Course 15.371 (Innovation Teams), where she teamed up with classmates to think about how to apply her technology. She also used the MIT Venture Mentoring Service and MIT Sandbox, and took part in the Martin Trust Center for MIT Entrepreneurship’s delta v startup accelerator.
When Pirhaji and Fraenkel officially founded ReviveMed, they worked with MIT’s Technology Licensing Office to access the patents around their work. Pirhaji has since further developed the platform to solve other problems she discovered from talks with hundreds of leaders in pharmaceutical companies.
ReviveMed began by working with hospitals to uncover how lipids are dysregulated in a disease known as metabolic dysfunction-associated steatohepatitis. In 2020, ReviveMed worked with Bristol Myers Squibb to predict how subsets of cancer patients would respond to the company’s immunotherapies.
Since then, ReviveMed has worked with several companies, including four of the top 10 global pharmaceutical companies, to help them understand the metabolic mechanisms behind their treatments. Those insights help companies more quickly identify the patients who stand to benefit most from different therapies.
“If we know which patients will benefit from every drug, it would really decrease the complexity and time associated with clinical trials,” Pirhaji says. “Patients will get the right treatments faster.”
Generative models for metabolomics
Earlier this year, ReviveMed collected a dataset based on 20,000 patient blood samples that it used to create digital twins of patients and generative AI models for metabolomics research. ReviveMed is making its generative models available to nonprofit academic researchers, which could accelerate our understanding of how metabolites influence a range of diseases.
“We’re democratizing the use of metabolomic data,” Pirhaji says. “It’s impossible for us to have data from every single patient in the world, but our digital twins can be used to find patients that could benefit from treatments based on their demographics, for instance, by finding patients that could be at risk of cardiovascular disease.”
The work is part of ReviveMed’s mission to create metabolic foundation models that researchers and pharmaceutical companies can use to understand how diseases and treatments change the metabolites of patients.
“Leila solved a lot of really hard problems you face when you’re trying to take an idea out of the lab and turn it into something that’s robust and reproducible enough to be deployed in biomedicine,” Fraenkel says. “Along the way, she also realized the software that she’s developed is incredibly powerful by itself and could be transformational.”
Unlocking the secrets of fusion’s core with AI-enhanced simulations
Creating and sustaining fusion reactions — essentially recreating star-like conditions on Earth — is extremely difficult, and Nathan Howard PhD ’12, a principal research scientist at the MIT Plasma Science and Fusion Center (PSFC), thinks it’s one of the most fascinating scientific challenges of our time. “Both the science and the overall promise of fusion as a clean energy source are really interesting. That motivated me to come to grad school [at MIT] and work at the PSFC,” he says.
Howard is a member of the Magnetic Fusion Experiments Integrated Modeling (MFE-IM) group at the PSFC. Along with MFE-IM group leader Pablo Rodriguez-Fernandez, Howard and the team use simulations and machine learning to predict how plasma will behave in a fusion device. MFE-IM and Howard’s research aims to forecast a given technology or configuration’s performance before it’s piloted in an actual fusion environment, allowing for smarter design choices. To ensure their accuracy, these models are continuously validated using data from previous experiments, keeping their simulations grounded in reality.
In a recent open-access paper titled “Prediction of Performance and Turbulence in ITER Burning Plasmas via Nonlinear Gyrokinetic Profile Prediction,” published in the January issue of Nuclear Fusion, Howard explains how he used high-resolution simulations of the swirling structures present in plasma, called turbulence, to confirm that the world’s largest experimental fusion device, currently under construction in Southern France, will perform as expected when switched on. He also demonstrates how a different operating setup could produce nearly the same amount of energy output but with less energy input, a discovery that could positively affect the efficiency of fusion devices in general.
The biggest and best of what’s never been built
Forty years ago, the United States and six other member nations came together to build ITER (Latin for “the way”), a fusion device that, once operational, would yield 500 megawatts of fusion power, and a plasma able to generate 10 times more energy than it absorbs from external heating. The plasma setup designed to achieve these goals — the most ambitious of any fusion experiment — is called the ITER baseline scenario, and as fusion science and plasma physics have progressed, ways to achieve this plasma have been refined using increasingly more powerful simulations like the modeling framework Howard used.
In his work to verify the baseline scenario, Howard used CGYRO, a computer code developed by Howard’s collaborators at General Atomics. CGYRO applies a complex plasma physics model to a set of defined fusion operating conditions. Although it is time-intensive, CGYRO generates very detailed simulations of how plasma behaves at different locations within a fusion device.
The comprehensive CGYRO simulations were then run through the PORTALS framework, a collection of tools originally developed at MIT by Rodriguez-Fernandez. “PORTALS takes the high-fidelity [CGYRO] runs and uses machine learning to build a quick model called a ‘surrogate’ that can mimic the results of the more complex runs, but much faster,” Rodriguez-Fernandez explains. “Only high-fidelity modeling tools like PORTALS give us a glimpse into the plasma core before it even forms. This predict-first approach allows us to create more efficient plasmas in a device like ITER.”
After the first pass, the surrogates’ accuracy was checked against the high-fidelity runs, and if a surrogate wasn’t producing results in line with CGYRO’s, PORTALS was run again to refine the surrogate until it better mimicked CGYRO’s results. “The nice thing is, once you have built a well-trained [surrogate] model, you can use it to predict conditions that are different, with a very much reduced need for the full complex runs.” Once they were fully trained, the surrogates were used to explore how different combinations of inputs might affect ITER’s predicted performance and how it achieved the baseline scenario. Notably, the surrogate runs took a fraction of the time, and they could be used in conjunction with CGYRO to give it a boost and produce detailed results more quickly.
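The surrogate loop described above (train a cheap model on a handful of expensive runs, check it against the high-fidelity code, and refine where it disagrees) can be sketched in miniature. The "simulation" below is a stand-in function, not CGYRO, and the piecewise-linear surrogate is far simpler than the machine-learned models in PORTALS:

```python
# Toy surrogate-modeling loop (illustrative only; not the actual PORTALS/CGYRO code).
def expensive_simulation(x):
    # Stand-in for a costly high-fidelity run (e.g., one CGYRO evaluation).
    return x ** 3 - 2 * x

def build_surrogate(samples):
    """Fit a crude 'fast model': piecewise-linear interpolation over (input, output) pairs."""
    samples = sorted(samples)
    def surrogate(x):
        for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        raise ValueError("x outside training range")
    return surrogate

# Start with a few high-fidelity runs, then refine where the surrogate disagrees.
inputs = [0.0, 1.0, 2.0]
samples = [(x, expensive_simulation(x)) for x in inputs]
surrogate = build_surrogate(samples)

x_check = 0.5
if abs(surrogate(x_check) - expensive_simulation(x_check)) > 0.1:
    samples.append((x_check, expensive_simulation(x_check)))  # one more expensive run
    surrogate = build_surrogate(samples)  # refined surrogate now matches at x_check
```

Once trained, the cheap surrogate can be queried thousands of times to explore input combinations that would be prohibitively expensive to run at full fidelity.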
“Just dropped in to see what condition my condition was in”
Howard’s work with CGYRO, PORTALS, and surrogates examined a specific combination of operating conditions that had been predicted to achieve the baseline scenario. Those conditions included the magnetic field used, the methods used to control plasma shape, the external heating applied, and many other variables. Using 14 iterations of CGYRO, Howard was able to confirm that the current baseline scenario configuration could achieve 10 times more power output than input into the plasma. Howard says of the results, “The modeling we performed is maybe the highest fidelity possible at this time, and almost certainly the highest fidelity published.”
The 14 iterations of CGYRO used to confirm the plasma performance included running PORTALS to build surrogate models for the input parameters and then tying the surrogates to CGYRO to work more efficiently. It only took three additional iterations of CGYRO to explore an alternate scenario that predicted ITER could produce almost the same amount of energy with about half the input power. The surrogate-enhanced CGYRO model revealed that the temperature of the plasma core — and thus the fusion reactions — wasn’t overly affected by less power input; less power input equals more efficient operation. Howard’s results are also a reminder that there may be other ways to improve ITER’s performance; they just haven’t been discovered yet.
Howard reflects, “The fact that we can use the results of this modeling to influence the planning of experiments like ITER is exciting. For years, I’ve been saying that this was the goal of our research, and now that we actually do it — it’s an amazing arc, and really fulfilling.”
Viewing the universe through ripples in space
In early September 2015, Salvatore Vitale, who was then a research scientist at MIT, stopped home in Italy for a quick visit with his parents after attending a meeting in Budapest. The meeting had centered on the much-anticipated power-up of Advanced LIGO — a system scientists hoped would finally detect a passing ripple in space-time known as a gravitational wave.
Albert Einstein had predicted the existence of these cosmic reverberations nearly 100 years earlier and thought they would be impossible to measure. But scientists including Vitale believed they might have a shot with their new ripple detector, which was scheduled, finally, to turn on in a few days. At the meeting in Budapest, team members were excited, albeit cautious, acknowledging that it could be months or years before the instruments picked up any promising signs.
However, the day after he arrived for his long-overdue visit with his family, Vitale received a huge surprise.
“The next day, we detect the first gravitational wave, ever,” he remembers. “And of course I had to lock myself in a room and start working on it.”
Vitale and his colleagues had to work in secrecy to prevent the news from getting out before they could scientifically confirm the signal and characterize its source. That meant that no one — not even his parents — could know what he was working on. Vitale departed for MIT and promised that he would come back to visit for Christmas.
“And indeed, I fly back home on the 25th of December, and on the 26th we detect the second gravitational wave! At that point I had to swear them to secrecy and tell them what happened, or they would strike my name from the family record,” he says, only partly in jest.
With the family peace restored, Vitale could focus on the path ahead, which suddenly seemed bright with gravitational discoveries. He and his colleagues, as part of the LIGO Scientific Collaboration, announced the detection of the first gravitational wave in February 2016, confirming Einstein’s prediction. For Vitale, the moment also solidified his professional purpose.
“Had LIGO not detected gravitational waves when it did, I would not be where I am today,” Vitale says. “For sure I was very lucky to be doing this at the right time, for me, and for the instrument and the science.”
A few months later, Vitale joined the MIT faculty as an assistant professor of physics. Today, as a recently tenured associate professor, he is working with his students to analyze a bounty of gravitational signals, from Advanced LIGO as well as Virgo (a similar detector in Italy) and KAGRA, in Japan. The combined power of these observatories is enabling scientists to detect at least one gravitational wave a week, which has revealed a host of extreme sources, from merging black holes to colliding neutron stars.
“Gravitational waves give us a different view of the same universe, which could teach us about things that are very hard to see with just photons,” Vitale says.
Random motion
Vitale is from Reggio di Calabria, a small coastal city in the south of Italy, right at “the tip of the boot,” as he says. His family owned and ran a local grocery store, where he spent so much time as a child that he could recite the names of nearly all the wines in the store.
When he was 9 years old, he remembers stopping in at the local newsstand, which also sold used books. He gathered all the money he had in order to purchase two books, both by Albert Einstein. The first was a collection of letters from the physicist to his friends and family. The second was his theory of relativity.
“I read the letters, and then went through the second book and remember seeing these weird symbols that didn’t mean anything to me,” Vitale recalls.
Nevertheless, the kid was hooked, and continued reading up on physics, and later, quantum mechanics. Toward the end of high school, it wasn’t clear if Vitale could go on to college. Large grocery chains had run his parents’ store out of business, and in the process, the family lost their home and were struggling to recover their losses. But with his parents’ support, Vitale applied and was accepted to the University of Bologna, where he went on to earn a bachelor’s and a master’s in theoretical physics, specializing in general relativity and approximating ways to solve Einstein’s equations. He went on to pursue his PhD in theoretical physics at the Pierre and Marie Curie University in Paris.
“Then, things changed in a very, very random way,” he says.
Vitale’s PhD advisor was hosting a conference, and Vitale volunteered to hand out badges and flyers and help guests get their bearings. That first day, one guest drew his attention.
“I see this guy sitting on the floor, kind of banging his head against his computer because he could not connect his Ubuntu computer to the Wi-Fi, which back then was very common,” Vitale says. “So I tried to help him, and failed miserably, but we started chatting.”
The guest happened to be a professor from Arizona who specialized in analyzing gravitational-wave signals. Over the course of the conference, the two got to know each other, and the professor invited Vitale to Arizona to work with his research group. The unexpected opportunity opened a door to gravitational-wave physics that Vitale might have passed by otherwise.
“When I talk to undergrads and how they can plan their career, I say I don’t know that you can,” Vitale says. “The best you can hope for is a random motion that, overall, goes in the right direction.”
High risk, high reward
Vitale spent two months at Embry-Riddle Aeronautical University in Prescott, Arizona, where he analyzed simulated data of gravitational waves. At that time, around 2009, no one had detected actual signals of gravitational waves. The first iteration of the LIGO detectors began observations in 2002 but had so far come up empty.
“Most of my first few years was working entirely with simulated data because there was no real data in the first place. That led a lot of people to leave the field because it was not an obvious path,” Vitale says.
Nevertheless, the work he did in Arizona only piqued his interest, and Vitale chose to specialize in gravitational-wave physics, returning to Paris to finish up his PhD, then going on to a postdoc position at NIKHEF, the Dutch National Institute for Subatomic Physics at the University of Amsterdam. There, he joined the Virgo collaboration as a member, making further connections among the gravitational-wave community.
In 2012, he made the move to Cambridge, Massachusetts, where he started as a postdoc at MIT’s LIGO Laboratory. At that time, scientists there were focused on fine-tuning Advanced LIGO’s detectors and simulating the types of signals that they might pick up. Vitale helped to develop an algorithm to search for signals likely to be gravitational waves.
Just before the detectors turned on for the first observing run, Vitale was promoted to research scientist. And as luck would have it, he was working with MIT students and colleagues on one of the two algorithms that picked up what would later be confirmed to be the first ever gravitational wave.
“It was exciting,” Vitale recalls. “Also, it took us several weeks to convince ourselves that it was real.”
In the whirlwind that followed the official announcement, Vitale became an assistant professor in MIT’s physics department. In 2017, in recognition of the discovery, the Nobel Prize in Physics was awarded to three pivotal members of the LIGO team, including MIT’s Rainer Weiss. Vitale and other members of the LIGO-Virgo collaboration attended the Nobel ceremony later on, in Stockholm, Sweden — a moment that was captured in a photograph displayed proudly in Vitale’s office.
Vitale was promoted to associate professor in 2022 and earned tenure in 2024. Unfortunately, his father passed away shortly before the tenure announcement. “He would have been very proud,” Vitale reflects.
Now, in addition to analyzing gravitational-wave signals from LIGO, Virgo, and KAGRA, Vitale is pushing ahead on plans for an even bigger, better LIGO successor. He is part of the Cosmic Explorer Project, which aims to build a gravitational-wave detector that is similar in design to LIGO but 10 times bigger. At that scale, scientists believe such an instrument could pick up signals from sources that are much farther away in space and time, even close to the beginning of the universe.
Then, scientists could look for never-before-detected sources, such as the very first black holes formed in the universe. They could also search within the same neighborhood as LIGO and Virgo, but with higher precision. Then, they might see gravitational signals that Einstein didn’t predict.
“Einstein developed the theory of relativity to explain everything from the motion of Mercury, which circles the sun every 88 days, to objects such as black holes that are 30 times the mass of the sun and move at half the speed of light,” Vitale says. “There’s no reason the same theory should work for both cases, but so far, it seems so, and we’ve found no departure from relativity. But you never know, and you have to keep looking. It’s high risk, for high reward.”
Engineers turn the body’s goo into new glue
Within the animal kingdom, mussels are masters of underwater adhesion. The marine mollusks cluster atop rocks and along the bottoms of ships, and hold fast against the ocean’s waves thanks to a gluey plaque they secrete through their foot. These tenacious adhesive structures have prompted scientists in recent years to design similar bioinspired, waterproof adhesives.
Now engineers from MIT and Freie Universität Berlin have developed a new type of glue that combines the waterproof stickiness of the mussels’ plaques with the germ-proof properties of another natural material: mucus.
Every surface in our bodies not covered in skin is lined with a protective layer of mucus — a slimy network of proteins that acts as a physical barrier against bacteria and other infectious agents. In their new work, the engineers combined sticky, mussel-inspired polymers with mucus-derived proteins, or mucins, to form a gel that strongly adheres to surfaces.
The new mucus-derived glue prevented the buildup of bacteria while keeping its sticky hold, even on wet surfaces. The researchers envision that once the glue’s properties are optimized, it could be applied as a liquid by injection or spray, which would then solidify into a sticky gel. The material might be used to coat medical implants, for example, to prevent infection and bacteria buildup.
The team’s new glue-making approach could also be adjusted to incorporate other natural materials, such as keratin — a fibrous substance found in feathers and hair, with certain chemical features resembling those of mucus.
“The applications of our materials design approach will depend on the specific precursor materials,” says George Degen, a postdoc in MIT’s Department of Mechanical Engineering. “For example, mucus-derived or mucus-inspired materials might be used as multifunctional biomedical adhesives that also prevent infections. Alternatively, applying our approach to keratin might enable development of sustainable packaging materials.”
A paper detailing the team’s results appears this week in the Proceedings of the National Academy of Sciences. Degen’s MIT co-authors include Corey Stevens, Gerardo Cárcamo-Oyarce, Jake Song, Katharina Ribbeck, and Gareth McKinley, along with Raju Bej, Peng Tang, and Rainer Haag of Freie Universität Berlin.
A sticky combination
Before coming to MIT, Degen was a graduate student at the University of California at Santa Barbara, where he worked in a research group that studied the adhesive mechanisms of mussels.
“Mussels are able to deposit materials that adhere to wet surfaces in seconds to minutes,” Degen says. “These natural materials do better than existing commercialized adhesives, specifically at sticking to wet and underwater surfaces, which has been a longstanding technical challenge.”
To stick to a rock or a ship, mussels secrete a protein-rich fluid. Chemical bonds, or cross-links, act as connection points between proteins, enabling the secreted substance to simultaneously solidify into a gel and stick to a wet surface.
As it happens, similar cross-linking features are found in mucin — a large protein that is the primary non-water component of mucus. When Degen came to MIT, he worked with both McKinley, a professor of mechanical engineering and an expert in materials science and fluid flow, and Katharina Ribbeck, a professor of biological engineering and a leader in the study of mucus, to develop a cross-linking glue that would combine the adhesive qualities of mussel plaques with the bacteria-blocking properties of mucus.
Mixing links
The MIT researchers teamed up with Haag and colleagues in Berlin who specialize in synthesizing bioinspired materials. Haag and Ribbeck are members of a collaborative research group that develops dynamic hydrogels for biointerfaces. Haag’s group has made mussel-like adhesives, as well as mucus-inspired liquids by producing microscopic, fiber-like polymers that are similar in structure to the natural mucin proteins.
For their new work, the researchers focused on a chemical motif that appears in mussel adhesives: a bond between two chemical groups known as “catechols” and “thiols.” In the mussel’s natural glue, or plaque, these groups combine to form catechol–thiol cross-links that contribute to the cohesive strength of the plaque. Catechols also enhance a mussel’s adhesion by binding to surfaces such as rocks and ship hulls.
Interestingly, thiol groups are also prevalent in mucin proteins. Degen wondered whether mussel-inspired polymers could link with mucin thiols, enabling the mucins to quickly turn from a liquid to a sticky gel.
To test this idea, he combined solutions of natural mucin proteins with synthetic mussel-inspired polymers and observed how the resulting mixture solidified and stuck to surfaces over time.
“It’s like a two-part epoxy. You combine two liquids together, and chemistry starts to occur so that the liquid solidifies while the substance is simultaneously gluing itself to the surface,” Degen says.
“Depending on how much cross-linking you have, we can control the speed at which the liquids gelate and adhere,” Haag adds. “We can do this all on wet surfaces, at room temperature, and under very mild conditions. This is what is quite unique.”
The team deposited a range of compositions between two surfaces and found that the resulting adhesive held the surfaces together, with forces comparable to the commercial medical adhesives used for bonding tissue. The researchers also tested the adhesive’s bacteria-blocking properties by depositing the gel onto glass surfaces and incubating them with bacteria overnight.
“We found if we had a bare glass surface without our coating, the bacteria formed a thick biofilm, whereas with our coating, biofilms were largely prevented,” Degen notes.
The team says that with a bit of tuning, they can further improve the adhesive’s hold. Then, the material could be a strong and protective alternative to existing medical adhesives.
“We are excited to have established a biomaterials design platform that gives us these desirable properties of gelation and adhesion, and as a starting point we’ve demonstrated some key biomedical applications,” Degen says. “We are now ready to expand into different synthetic and natural systems and target different applications.”
This research was funded, in part, by the U.S. National Institutes of Health, the U.S. National Science Foundation, and the U.S. Army Research Office.
Mixing beats, history, and technology
In a classroom on the third floor of the MIT Media Lab, it’s quiet; the disc jockey is setting up. At the end of a conference table ringed with chairs, there are two turntables on either side of a mixer and a worn crossfader. A MacBook sits to the right of the setup.
Today’s class — CMS.303/803/21M.365 (DJ History, Technique, and Technology) — takes students to the 1970s, which means disco, funk, rhythm and blues, and the breaks that form the foundation of early hip-hop are in the mix. Instructor Philip Tan ’01, SM ’03 starts with a needle drop. Class is about to begin.
Tan is a research scientist with the MIT Game Lab — part of the Institute’s Comparative Media Studies/Writing (CMS/W) program. An accomplished DJ and founder of a DJ crew at MIT, he’s been teaching students classic turntable and mixing techniques since 1998. Tan is also an accomplished game designer whose specialties include digital, live-action, and tabletop games, in both production and management. But today’s focus is on two turntables, a mixer, and music.
“DJ’ing is about using the platter as a musical instrument,” Tan says as students begin filing into the classroom, “and creating a program for audiences to enjoy.”
Originally from Singapore, Tan arrived in the United States — first as a high school student in 1993, and later as an MIT student in 1997 — to study the humanities. He brought his passion for DJ culture with him.
“A high school friend in Singapore introduced DJ’ing to me in 1993,” he recalls. “We DJ’d a couple of school dances together and entered the same DJ competitions. Before that, though, I made mix tapes, pausing the cassette recorder while cuing up the next song on cassette, compact disc, or vinyl.”
Later, Tan wondered if his passion could translate into a viable course, exploring the idea over several years. “I wanted to find and connect with other folks on campus who might also be interested in DJ’ing,” he says. During MIT’s Independent Activities Period (IAP) in 2019, he led a four-week “Discotheque” lecture series at the Lewis Music Library, talking about vinyl records, DJ mixers, speakers, and digital audio. He also ran meetups for campus DJs in the MIT Music Production Collaborative.
“We couldn’t really do meetups and in-person performances during the pandemic, but I had the opportunity to offer a spring Experiential Learning Opportunity for MIT undergraduates, focused on DJ’ing over livestreams,” he says. The CMS/W program eventually let Tan expand the IAP course to a full-semester, full-credit course in spring 2023.
Showing students the basics
In the class, students learn the foundational practices necessary for live DJ mixing. They also explore a chosen contemporary or historical dance scene from around the world. The course investigates the evolution of DJ’ing and the technology used to make it possible. Students are asked to write and present their findings to the class based on historical research and interviews; create a mix tape showcasing their research into a historical development in dance music, mixing technique, or DJ technology; and end the semester with a live DJ event for the MIT community. Access to the popular course is granted via lottery.
“From circuits to signal processing, we have been able to see real-life uses of our course subjects in a fun and exciting way,” says Madeline Leano, a second-year student majoring in computer science and engineering and minoring in mathematics. “I’ve also always had a great love for music, and this class has already broadened my music taste as well as widened my appreciation for how music is produced.”
Leano lauded the class’s connections with her work in engineering and computer science. “[Tan] would always emphasize how all the parts of the mixing board work technically, which would come down to different electrical engineering and physics topics,” she notes. “It was super fun to see the overlap of our technical coursework with this class.”
During today’s class, Tan walks students through the evolution of the DJ’s tools, explaining how DJ’ing shifted alongside technological advances by the companies producing the equipment. Tan delves into differences in hardware for disco and hip-hop DJs, noting, for example, that the Bozak CMA-10-2DL mixer lacked a crossfader while the UREI 1620 music mixer was all knobs. Needs changed as the culture changed, Tan explains, and so did the DJ’s tools.
He’s also immersing the class in music and cultural history, discussing the foundations of disco and hip-hop in the early 1970s and the former’s reign throughout the decade while the latter grew alongside it. Club culture for members of the LGBTQ+ community, safe spaces for marginalized groups to dance and express themselves, and previously unheard stories from these folks are carefully excavated and examined at length.
“Studying meter, reviewing music history, and learning new skills”
Toward the end of the class, each student takes their place behind the turntables. They’re searching by feel for the ease with which Tan switches back and forth between two tracks, trying to get the right blend of beats so they don’t lose the crowd. You can see their confidence growing in real time as he patiently walks them through the process: find the groove, move between them, blend the beat. They come to understand that it’s harder than it might appear.
“I’m not looking for students to become expert scratchers,” Tan says. “We’re studying meter, reviewing music history, and learning new skills.”
“Philip is one of the coolest teachers I have had here at MIT!” Leano exclaims. “You can just tell from the way he holds himself in class how both knowledgeable and passionate he is about DJ history and technology.”
Watching Tan demonstrate techniques to students, it’s easy to appreciate the skill and dexterity necessary to both DJ well and to show others how it’s done. He’s steeped in the craft of DJ’ing, as comfortable with two turntables and a mixer as he is with a digital setup favored by DJs from other genres, like electronic dance music. Students, including Leano, note his skill, ability, and commitment.
“Any question that any classmate may have is always answered in such depth he seems like a walking dictionary,” she says. “Not to mention, he makes the class so interactive with us coming to the front and using the board, making sure everyone gets what is happening.”
Body of knowledge
Inside MIT’s Zesiger Sports and Fitness Center, on the springy blue mat of the gymnastics room, an unconventional anatomy lesson unfolded during an October meeting of class STS.024/CMS.524 (Thinking on Your Feet: Dance as a Learning Science).
Supported by a grant from the MIT Center for Art, Science & Technology (CAST), Thinking on Your Feet was developed and offered for the first time in Fall 2024 by Jennifer S. Light, the Bern Dibner Professor of the History of Science and Technology and a professor of Urban Studies and Planning. Light’s vision for the class included a varied lineup of guest instructors. During the last week of October, she handed the reins to Middlebury College Professor Emerita Andrea Olsen, whose expertise bridges dance and science.
Olsen organized the class into small groups. Placing hands on each other’s shoulders conga-line style, participants shuffled across the mat personifying the layers of the nervous system as Olsen had just explained them: the supportive spinal cord and bossy brain of the central nervous system; the sympathetic nervous system responsible for fight-or-flight and its laid-back parasympathetic counterpart; and the literal “gut feelings” of the enteric nervous system. The groups giggled and stumbled as they attempted to stay in character and coordinate their movements.
Unusual as this exercise was, it perfectly suited a class dedicated to movement as a tool for teaching and learning. One of the class’s introductory readings, an excerpt from Annie Murphy Paul’s book “The Extended Mind,” suggests why this was a more effective primer on the nervous system than a standard lecture: “Our memory for what we have heard is remarkably weak. Our memory for what we have done, however — for physical actions we have undertaken — is much more robust.”
Head-to-toe education
Thinking on Your Feet is the third course spun out from Light’s Project on Embodied Education (the other two, developed in collaboration with MIT Director of Physical Education and Wellness Carrie Sampson Moore, examine the history of exercise in relation to schools and medicine, respectively). A historian of science and technology and historian of education for much of her career, Light refocused her scholarship on movement and learning after she’d begun training at Somerville’s Esh Circus Arts to counteract the stress of serving as department head. During her sabbatical a few years later, as part of Esh’s pre-professional program for aspiring acrobats, she took a series of dance classes spanning genres from ballet to hip-hop to Afro modern.
“I started playing with the idea that this is experiential learning — could I bring something like this back to MIT?” she recalls. “There’s a ton of interesting contemporary scientific research on cognition and learning as not just neck-up processes, but whole-body processes.”
Thinking on Your Feet provides an overview of recent scientific studies indicating the surprising extent to which physical activity enhances attention, memory, executive function, and other aspects of mental acuity. Other readings consider dance’s role in the transmission of knowledge throughout human history — from the Native Hawaiian tradition of hula to early forms of ballet in European courts — and describe the ways movement-based instruction can engage underserved populations and neurodiverse learners.
“You can argue for embodied learning on so many dimensions,” says Light. “I want my students to understand that what they’ve been taught about learning is only part of the story, and that contemporary science, ancient wisdom, and non-Western traditions all have a lot to tell us about how we might rethink education to maximize the benefits for all different kinds of students.”
Learning to dance
If you scan the new class’s syllabus, you’re unlikely to miss the word “fun.” It appears twice — bolded, in all caps, and garnished by an exclamation point.
“I’m trying to bring a playful, experimental, ‘you don’t have to be perfect, just be creative’ vibe,” says Light. A dance background is not a prerequisite. The 18 students who registered this fall ranged from experienced dancers to novices.
“I initially took this class just to fulfill my arts requirement,” admits junior physics major Matson Garza, one of the latter group. He was surprised at how much he enjoyed it. “I have an interest in physics education, and I’ve found that beyond introductory courses it’s often lacking intuition. Integrating movement may be one way to solve this problem.”
Similarly, second-year biological engineering major Annabel Tiong found her entry point through an interest in hands-on education, deepened after volunteering with a program that aims to spark curiosity about health-care careers by engaging kids in medical simulations. “While I don’t have an extensive background in dance,” she says, “I was curious how dance, with its free-form and creative nature, could be used to teach STEM topics that appear to be quite concrete and technical.”
To build on each Tuesday’s lectures and discussions, Thursday “lab” sessions focused on overcoming inhibitions, teaching different styles of movement, and connecting dance with academic content. McKersin of Lakaï Arts, a lecturer in dance for the MIT Music and Theater Arts section, led a lab on Haitian harvest dances; Guy Steele PhD ’80 and Clark Baker SM ’80 of the MIT Tech Squares club provided an intro to square dancing and some of its connections to math and programming. Light invited some of her own dance instructors from the circus community, including Johnny Blazes, who specializes (according to their website) in working with “people who have been told implicitly and explicitly that they don’t belong in movement and fitness spaces.” Another, Reba Rosenberg, led the students through basic partner acrobatics that Light says did wonders for the class’s sense of confidence and community.
“Afterwards, several students asked, ‘Could we do this again?’” remembers Light. “None of them thought they could do the thing that by the end of class they were able to do: balance on each other, stand on each other. You can imagine how the need to physically trust someone with your safety yields incredible benefits when we’re back in the classroom.”
Dancing to learn
The culmination of Thinking on Your Feet — a final project constituting 40 percent of students’ grades — required each student to create a dance-based lesson plan on a STEM topic of their choice. Students were exposed throughout the semester to examples of such pedagogy. Olsen’s nervous-system parade was one. Others came courtesy of Lewis Hou of Science Ceilidh, an organization that uses Scottish highland dance to illustrate concepts across the natural and physical sciences, and MIT alumna Yamilée Toussaint ’08, whose nonprofit STEM from Dance helps young women of color create performances with technical components.
As a stepping stone, Light had planned a midterm assignment asking students to adapt existing choreography. But her students surprised her by wanting to jump directly into creating their own dances from scratch. Those first forays weren’t elaborate, but Light was impressed enough by their efforts that she plans to amend the syllabus accordingly.
“One group was doing differential calculus and imagining the floor as a graph,” she recalls, “having dancers think about where they were in relation to each other.” Another group, comprising members of the MIT Ballroom Dance team, choreographed the computer science concept of pipelined processors. “They were giving commands to each other like ‘load’ and ‘execute’ and ‘write back,’” Light says. “The beauty of this is that the students could offer each other feedback on the technical piece of it. Like, ‘OK, I see that you’re trying to explain a clock cycle. Maybe try to do it this way.’”
Among the pipelined processing team was senior Kateryna Morhun, a competitive ballroom dancer since age 4 who is earning her degree in artificial intelligence and decision-making. “We wanted to challenge ourselves to teach a specialized, more technical topic that isn’t usually a target of embodied learning initiatives,” Morhun says.
How useful can dance really be in teaching advanced academic content? This was a lively topic of debate among the Thinking on Your Feet cohort. It’s a question Light intends to investigate further with mechanical engineering lecturer Benita Comeau, who audited the class and offered a lab exploring the connections among dance, physics, and martial arts.
“This class sparked many ideas for me, across multiple subject matters and movement styles,” says Comeau. “As an example, the square dance class reminded me of the symmetry groups that are used to describe molecular symmetry in chemistry, and it occurred to me that students could move through symmetry groups and learn about chirality” — a geometric property relevant to numerous branches of science.
For their final presentation, Garza and Tiong’s group tackled substitution mechanisms, a topic from organic chemistry (“notoriously viewed as a very difficult and dreaded class,” according to their write-up). Their lesson plan specified that learners would first need to familiarize themselves with key points through conventional readings and discussion. But then, to bring that material alive, groups of learners representing atoms would take the floor. One, portraying a central carbon atom, would hold out an arm indicating readiness to accept an electron. Another would stand to the side with two balls representing electrons, bonded by a ribbon. Others would rotate in a predetermined order around the central carbon to portray a model’s initial stereochemistry. And so a dance would begin: a three-dimensional, human-scale visualization of a complex chemical process.
The group was asked to summarize what they hoped learners would discover through their dance. “Chemistry is very dynamic!” they wrote. “It’s not mixing chemicals to magically make new ones — it’s a dynamic process of collision, bonding, and molecule-breaking that causes some structures to vanish and others to appear.”
In addition to evaluating the impact of movement in her classes in collaboration with Raechel Soicher from the MIT Teaching + Learning Lab, Light is working on a book about how modern science has rediscovered the ancient wisdom of embodied learning. She hopes her class will kick off a conversation at MIT about incorporating such movement-assisted insights into the educational practices of the future. In fact, she believes MIT’s heritage of innovative pedagogy makes it ripe for these explorations.
As her syllabus puts it: “For all of us, as part of the MIT community, this class invites us to reconsider how our ‘mind and hand’ approach to experiential learning — a product of the 19th century — might be expanded to ‘mind and body’ for the 21st century.”
AI model deciphers the code in proteins that tells them where to go
Proteins are the workhorses that keep our cells running, and there are many thousands of types of proteins in our cells, each performing a specialized function. Researchers have long known that the structure of a protein determines what it can do. More recently, researchers are coming to appreciate that a protein’s localization is also critical for its function. Cells are full of compartments that help to organize their many denizens. Along with the well-known organelles that adorn the pages of biology textbooks, these spaces also include a variety of dynamic, membrane-less compartments that concentrate certain molecules together to perform shared functions. Knowing where a given protein localizes, and who it co-localizes with, can therefore be useful for better understanding that protein and its role in the healthy or diseased cell, but researchers have lacked a systematic way to predict this information.
Meanwhile, protein structure has been studied for over half a century, culminating in the artificial intelligence tool AlphaFold, which can predict protein structure from a protein’s amino acid code, the linear string of building blocks within it that folds to create its structure. AlphaFold and models like it have become widely used tools in research.
Proteins also contain regions of amino acids that do not fold into a fixed structure, but are instead important for helping proteins join dynamic compartments in the cell. MIT Professor Richard Young and colleagues wondered whether the code in those regions could be used to predict protein localization in the same way that other regions are used to predict structure. Other researchers have discovered some protein sequences that code for protein localization, and some have begun developing predictive models for protein localization. However, researchers did not know whether a protein’s localization to any dynamic compartment could be predicted based on its sequence, nor did they have a comparable tool to AlphaFold for predicting localization.
Now, Young, also a member of the Whitehead Institute for Biological Research; Young lab postdoc Henry Kilgore; Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in MIT's Department of Electrical Engineering and Computer Science and principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL); and colleagues have built such a model, which they call ProtGPS. In a paper published on Feb. 6 in the journal Science, with first authors Kilgore and Barzilay lab graduate students Itamar Chinn, Peter Mikhael, and Ilan Mitnikov, the cross-disciplinary team debuts their model. The researchers show that ProtGPS can predict to which of 12 known types of compartments a protein will localize, as well as whether a disease-associated mutation will change that localization. Additionally, the research team developed a generative algorithm that can design novel proteins to localize to specific compartments.
“My hope is that this is a first step towards a powerful platform that enables people studying proteins to do their research,” Young says, “and that it helps us understand how humans develop into the complex organisms that they are, how mutations disrupt those natural processes, and how to generate therapeutic hypotheses and design drugs to treat dysfunction in a cell.”
The researchers also validated many of the model’s predictions with experimental tests in cells.
“It really excited me to be able to go from computational design all the way to trying these things in the lab,” Barzilay says. “There are a lot of exciting papers in this area of AI, but 99.9 percent of those never get tested in real systems. Thanks to our collaboration with the Young lab, we were able to test, and really learn how well our algorithm is doing.”
Developing the model
The researchers trained and tested ProtGPS on two batches of proteins with known localizations. They found that it could correctly predict where proteins end up with high accuracy. The researchers also tested how well ProtGPS could predict changes in protein localization based on disease-associated mutations within a protein. Many mutations — changes to the sequence for a gene and its corresponding protein — have been found to contribute to or cause disease based on association studies, but the ways in which the mutations lead to disease symptoms remain unknown.
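ProtGPS itself is a large learned model, but the basic train-and-predict loop described above can be illustrated with a toy stand-in: a nearest-centroid classifier over amino-acid composition. Everything below (the sequences, the compartment labels, and the composition features) is invented for illustration and is not the actual ProtGPS method or data.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Fraction of each amino acid in the sequence (a crude feature vector)."""
    counts = Counter(seq)
    n = len(seq)
    return [counts.get(aa, 0) / n for aa in AMINO_ACIDS]

def train_centroids(examples):
    """Average the feature vectors of each compartment's training sequences."""
    sums, counts = {}, {}
    for seq, label in examples:
        feats = composition(seq)
        if label not in sums:
            sums[label] = [0.0] * len(feats)
            counts[label] = 0
        sums[label] = [s + f for s, f in zip(sums[label], feats)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def predict(centroids, seq):
    """Assign the compartment whose centroid is closest in feature space."""
    feats = composition(seq)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(feats, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# Invented toy data: arginine/lysine-rich sequences stand in for nucleolar
# proteins, hydrophobic ones for membrane-associated proteins.
train = [
    ("RKRKRGGRKRK", "nucleolus"),
    ("KRRKGRKKRRG", "nucleolus"),
    ("LLVVAALLIVL", "membrane"),
    ("AVLLIVVLAAL", "membrane"),
]
model = train_centroids(train)
print(predict(model, "RKRKKRGRK"))   # resembles the nucleolar examples
print(predict(model, "LLIVALLVVA"))  # resembles the membrane examples
```

The real model learns far richer sequence representations, but the workflow is the same shape: featurize labeled proteins, fit a classifier, then score unseen sequences.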
Figuring out the mechanism for how a mutation contributes to disease is important because then researchers can develop therapies to fix that mechanism, preventing or treating the disease. Young and colleagues suspected that many disease-associated mutations might contribute to disease by changing protein localization. For example, a mutation could make a protein unable to join a compartment containing essential partners.
They tested this hypothesis by feeding ProtGPS more than 200,000 proteins with disease-associated mutations, then asking it both to predict where those mutated proteins would localize and to measure how much its prediction for a given protein changed from the normal to the mutated version. A large shift in the prediction indicates a likely change in localization.
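The shift measurement described here — score the wild-type and the mutant, then quantify how much the predicted localization distribution moves — can be sketched with a stand-in scoring function. The scoring rule, sequences, and threshold below are invented for illustration; they are not ProtGPS's actual scoring or cutoff.

```python
def predict_distribution(seq):
    """Toy stand-in for a localization model: turns crude sequence features
    into a probability distribution over two hypothetical compartments."""
    basic = sum(seq.count(aa) for aa in "RK") / len(seq)
    hydrophobic = sum(seq.count(aa) for aa in "AILV") / len(seq)
    total = (basic + hydrophobic) or 1.0
    return {"nucleolus": basic / total, "membrane": hydrophobic / total}

def localization_shift(wild_type, mutant):
    """Total-variation distance between the two predicted distributions."""
    p, q = predict_distribution(wild_type), predict_distribution(mutant)
    return 0.5 * sum(abs(p[k] - q[k]) for k in p)

wt = "RKRKRKAVLV"
mut = "AVAVLKAVLV"  # invented mutation replacing most basic residues
shift = localization_shift(wt, mut)
SHIFT_THRESHOLD = 0.3  # invented cutoff for flagging a likely re-localization
print(round(shift, 2), shift > SHIFT_THRESHOLD)
```

A large distance between the wild-type and mutant predictions is the signal that a mutation may re-route the protein, which is exactly the kind of candidate the team then checked experimentally.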
The researchers found many cases in which a disease-associated mutation appeared to change a protein’s localization. They tested 20 examples in cells, using fluorescence to compare where in the cell a normal protein and the mutated version of it ended up. The experiments confirmed ProtGPS’s predictions. Altogether, the findings support the researchers’ suspicion that mis-localization may be an underappreciated mechanism of disease, and demonstrate the value of ProtGPS as a tool for understanding disease and identifying new therapeutic avenues.
“The cell is such a complicated system, with so many components and complex networks of interactions,” Mitnikov says. “It’s super interesting to think that with this approach, we can perturb the system, see the outcome of that, and so drive discovery of mechanisms in the cell, or even develop therapeutics based on that.”
The researchers hope that others begin using ProtGPS in the same way that they use predictive structural models like AlphaFold, advancing various projects on protein function, dysfunction, and disease.
Moving beyond prediction to novel generation
The researchers were excited about the possible uses of their prediction model, but they also wanted their model to go beyond predicting localizations of existing proteins, and allow them to design completely new proteins. The goal was for the model to make up entirely new amino acid sequences that, when formed in a cell, would localize to a desired location. Generating a novel protein that can actually accomplish a function — in this case, the function of localizing to a specific cellular compartment — is incredibly difficult. In order to improve their model’s chances of success, the researchers constrained their algorithm to only design proteins like those found in nature. This is an approach commonly used in drug design, for logical reasons; nature has had billions of years to figure out which protein sequences work well and which do not.
Because of the collaboration with the Young lab, the machine learning team was able to test whether their protein generator worked. The model had good results. In one round, it generated 10 proteins intended to localize to the nucleolus. When the researchers tested these proteins in the cell, they found that four of them strongly localized to the nucleolus, and others may have had slight biases toward that location as well.
“The collaboration between our labs has been so generative for all of us,” Mikhael says. “We’ve learned how to speak each other’s languages, in our case learned a lot about how cells work, and by having the chance to experimentally test our model, we’ve been able to figure out what we need to do to actually make the model work, and then make it work better.”
Being able to generate functional proteins in this way could improve researchers’ ability to develop therapies. For example, if a drug must interact with a target that localizes within a certain compartment, then researchers could use this model to design a drug to also localize there. This should make the drug more effective and decrease side effects, since the drug will spend more time engaging with its target and less time interacting with other molecules, causing off-target effects.
The machine learning team members are enthused about the prospect of using what they have learned from this collaboration to design novel proteins with other functions beyond localization, which would expand the possibilities for therapeutic design and other applications.
“A lot of papers show they can design a protein that can be expressed in a cell, but not that the protein has a particular function,” Chinn says. “We actually had functional protein design, and a relatively huge success rate compared to other generative models. That’s really exciting to us, and something we would like to build on.”
All of the researchers involved see ProtGPS as an exciting beginning. They anticipate that their tool will be used to learn more about the roles of localization in protein function and mis-localization in disease. In addition, they are interested in expanding the model’s localization predictions to include more types of compartments, testing more therapeutic hypotheses, and designing increasingly functional proteins for therapies or other applications.
“Now that we know that this protein code for localization exists, and that machine learning models can make sense of that code and even create functional proteins using its logic, that opens up the door for so many potential studies and applications,” Kilgore says.
Engineers enable a drone to determine its position in the dark and indoors
In the future, autonomous drones could be used to shuttle inventory between large warehouses. A drone might fly into a semi-dark structure the size of several football fields, zipping along hundreds of identical aisles before docking at the precise spot where its shipment is needed.
Most of today’s drones would likely struggle to complete this task, since drones typically navigate outdoors using GPS, which doesn’t work in indoor environments. For indoor navigation, some drones employ computer vision or lidar, but both techniques are unreliable in dark environments or rooms with plain walls or repetitive features.
MIT researchers have introduced a new approach that enables a drone to self-localize, or determine its position, in indoor, dark, and low-visibility environments. Self-localization is a key step in autonomous navigation.
The researchers developed a system called MiFly, in which a drone uses radio frequency (RF) waves, reflected by a single tag placed in its environment, to autonomously self-localize.
Because MiFly enables self-localization with only one small tag, which could be affixed to a wall like a sticker, it would be cheaper and easier to implement than systems that require multiple tags. In addition, since the MiFly tag reflects signals sent by the drone, rather than generating its own signal, it can be operated with very low power.
Two off-the-shelf radars mounted on the drone enable it to localize in relation to the tag. Those measurements are fused with data from the drone’s onboard computer, which enables it to estimate its trajectory.
The researchers conducted hundreds of flight experiments with real drones in indoor environments, and found that MiFly consistently localized the drone to within 7 centimeters.
“As our understanding of perception and computing improves, we often forget about signals that are beyond the visible spectrum. Here, we’ve looked beyond GPS and computer vision to millimeter waves, and by doing so, we’ve opened up new capabilities for drones in indoor environments that were not possible before,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of a paper on MiFly.
Adib is joined on the paper by co-lead authors and research assistants Maisy Lam and Laura Dodds; Aline Eid, a former postdoc who is now an assistant professor at the University of Michigan; and Jimmy Hester, CTO and co-founder of Atheraxon, Inc. The research will be presented at the IEEE Conference on Computer Communications.
Backscattered signals
To enable drones to self-localize within dark, indoor environments, the researchers decided to utilize millimeter wave signals. Millimeter waves, which are commonly used in modern radars and 5G communication systems, work in the dark and can travel through everyday materials like cardboard, plastic, and interior walls.
They set out to create a system that could work with just one tag, so it would be cheaper and easier to implement in commercial environments. To ensure the device remained low power, they designed a backscatter tag that reflects millimeter wave signals sent by a drone’s onboard radar. The drone uses those reflections to self-localize.
But the drone’s radar would receive signals reflected from all over the environment, not just from the tag. The researchers surmounted this challenge with a technique called modulation: they configured the tag to shift the frequency of the signal it scatters back to the drone by a small, fixed amount.
“Now, the reflections from the surrounding environment come back at one frequency, but the reflections from the tag come back at a different frequency. This allows us to separate the responses and just look at the response from the tag,” Dodds says.
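The frequency-offset trick Dodds describes can be sketched in a few lines. This is a generic illustration, not MiFly's actual signal processing; the sample rate, offset frequency, and amplitudes below are assumed values.

```python
import numpy as np

# Illustrative sketch of separating a modulated tag's reflection from clutter.
# Sample rate, offset frequency, and amplitudes are assumptions, not MiFly's.
fs = 10_000                      # baseband sample rate (Hz)
f_mod = 1_000                    # small frequency offset added by the tag (Hz)
t = np.arange(2000) / fs         # 0.2 seconds of samples

# Environment reflections come back unshifted (concentrated near 0 Hz);
# the tag's reflection comes back offset by f_mod.
clutter = 5.0 + 0.5 * np.sin(2 * np.pi * 20 * t)
tag = np.cos(2 * np.pi * f_mod * t)
received = clutter + tag

# In the spectrum the two responses occupy different bins, so a window
# around f_mod isolates the tag's response.
spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(len(t), d=1 / fs)
band = (freqs > 500) & (freqs < 1500)
peak_freq = freqs[band][np.argmax(spectrum[band])]
print(peak_freq)  # 1000.0, the tag's offset frequency
```

The clutter dominates the spectrum near zero frequency, but looking only in the band around the tag's known offset recovers the tag's response cleanly.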
However, with just one tag and one radar, the researchers could only calculate distance measurements. They needed multiple signals to compute the drone’s location.
Rather than using more tags, they added a second radar to the drone, mounting one horizontally and one vertically. The horizontal radar transmits signals with horizontal polarization, while the vertical radar transmits with vertical polarization.
They incorporated polarization into the tag’s antennas so it could isolate the separate signals sent by each radar.
“Polarized sunglasses receive a certain polarization of light and block out other polarizations. We applied the same concept to millimeter waves,” Lam explains.
In addition, they applied different modulation frequencies to the vertical and horizontal signals, further reducing interference.
Precise location estimation
This dual-polarization and dual-modulation architecture gives the drone’s spatial location. But drones also tilt and rotate as they fly, so to navigate, a drone must estimate its position in space with respect to six degrees of freedom: pitch, yaw, and roll, in addition to the usual forward/backward, left/right, and up/down.
“The drone rotation adds a lot of ambiguity to the millimeter wave estimates. This is a big problem because drones rotate quite a bit as they are flying,” Dodds says.
They overcame these challenges by utilizing the drone’s onboard inertial measurement unit, a sensor that measures acceleration and changes in attitude. By fusing this information with the millimeter wave measurements reflected by the tag, MiFly can estimate the full six-degree-of-freedom pose of the drone in only a few milliseconds.
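The fusion step can be illustrated with a toy one-axis complementary filter. This is a generic sketch of combining a drifting inertial estimate with an absolute measurement, not the estimator MiFly actually uses, and every number is an assumption.

```python
# Toy one-axis complementary filter: a generic sketch of fusing a drifting
# IMU rate estimate with an absolute measurement. All values are assumptions.
def fuse(yaw_prev, gyro_rate, dt, yaw_measured, alpha=0.98):
    predicted = yaw_prev + gyro_rate * dt                  # fast IMU propagation
    return alpha * predicted + (1 - alpha) * yaw_measured  # slow correction

yaw = 0.0
true_yaw = 0.5        # radians; the drone holds this heading in the toy run
gyro_bias = 0.05      # rad/s; integrated alone, this would drift without bound
for _ in range(500):  # 5 seconds of updates at 100 Hz
    yaw = fuse(yaw, 0.0 + gyro_bias, dt=0.01, yaw_measured=true_yaw)
print(round(yaw, 2))  # 0.52: close to the true 0.5 despite the biased gyro
```

Integrating the biased gyro alone would accumulate 0.25 radians of error over the same 5 seconds; blending in the absolute measurement keeps the estimate near the truth.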
They tested a MiFly-equipped drone in several indoor environments, including their lab, the flight space at MIT, and the dim tunnels beneath the campus buildings. The system achieved high accuracy consistently across all environments, localizing the drone to within 7 centimeters in many experiments.
In addition, the system was nearly as accurate in situations where the tag was blocked from the drone’s view. They achieved reliable localization estimates up to 6 meters from the tag.
That distance could be extended in the future with the use of additional hardware, such as high-power amplifiers, or by improving the radar and antenna design. The researchers also plan to conduct further research by incorporating MiFly into an autonomous navigation system. This could enable a drone to decide where to fly and execute a flight path using millimeter wave technology.
“The infrastructure and localization algorithms we build up for this work are a strong foundation to go on and make them more robust to enable diverse commercial applications,” Lam says.
This research is funded, in part, by the National Science Foundation and the MIT Media Lab.
Study reveals the Phoenix galaxy cluster in the act of extreme cooling
The core of a massive cluster of galaxies appears to be pumping out far more stars than it should. Now researchers at MIT and elsewhere have discovered a key ingredient within the cluster that explains the core’s prolific starburst.
In a new study published in Nature, the scientists report using NASA’s James Webb Space Telescope (JWST) to observe the Phoenix cluster — a sprawling collection of gravitationally bound galaxies that circle a central massive galaxy some 5.8 billion light years from Earth. The cluster is the largest of its kind that scientists have so far observed. For its size and estimated age, the Phoenix should be what astronomers call “red and dead” — long done with any star formation that is characteristic of younger galaxies.
But astronomers previously discovered that the core of the Phoenix cluster appeared surprisingly bright, and the central galaxy seemed to be churning out stars at an extremely vigorous rate. The observations raised a mystery: How was the Phoenix fueling such rapid star formation?
In younger galaxies, the “fuel” for forging stars is in the form of extremely cold and dense clouds of interstellar gas. For the much older Phoenix cluster, it was unclear whether the central galaxy could undergo the extreme cooling of gas that would be required to explain its stellar production, or whether cold gas migrated in from other, younger galaxies.
Now, the MIT team has gained a much clearer view of the cluster’s core, using JWST’s far-reaching, infrared-measuring capabilities. For the first time, they have been able to map regions within the core where there are pockets of “warm” gas. Astronomers had previously seen hints of both very hot gas and very cold gas, but nothing in between.
The detection of warm gas confirms that the Phoenix cluster is actively cooling and able to generate a huge amount of stellar fuel on its own.
“For the first time we have a complete picture of the hot-to-warm-to-cold phase in star formation, which has really never been observed in any galaxy,” says study lead author Michael Reefe, a physics graduate student in MIT’s Kavli Institute for Astrophysics and Space Research. “There is a halo of this intermediate gas everywhere that we can see.”
“The question now is, why this system?” adds co-author Michael McDonald, associate professor of physics at MIT. “This huge starburst could be something every cluster goes through at some point, but we’re only seeing it happen currently in one cluster. The other possibility is that there’s something divergent about this system, and the Phoenix went down a path that other systems don’t go. That would be interesting to explore.”
Hot and cold
The Phoenix cluster was first spotted in 2010 by astronomers using the South Pole Telescope in Antarctica. The cluster comprises about 1,000 galaxies and lies in the constellation Phoenix, after which it is named. Two years later, McDonald led an effort to zero in on Phoenix using multiple telescopes, and discovered that the cluster’s central galaxy was extremely bright. The unexpected luminosity was due to a firehose of star formation. He and his colleagues estimated that this central galaxy was turning out stars at a staggering rate of about 1,000 per year.
“Previous to the Phoenix, the most star-forming galaxy cluster in the universe had about 100 stars per year, and even that was an outlier. The typical number is one-ish,” McDonald says. “The Phoenix is really offset from the rest of the population.”
Since that discovery, scientists have checked in on the cluster from time to time for clues to explain the abnormally high stellar production. They have observed pockets of ultrahot gas at about 1 million degrees Fahrenheit and regions of extremely cold gas at 10 kelvins, or 10 degrees above absolute zero.
The presence of very hot gas is no surprise: Most massive galaxies, young and old, host black holes at their cores that emit jets of extremely energetic particles that can continually heat up the galaxy’s gas and dust throughout a galaxy’s lifetime. Only in a galaxy’s early stages does some of this million-degree gas cool dramatically to ultracold temperatures that can then form stars. For the Phoenix cluster’s central galaxy, which should be well past the stage of extreme cooling, the presence of ultracold gas presented a puzzle.
“The question has been: Where did this cold gas come from?” McDonald says. “It’s not a given that hot gas will ever cool, because there could be black hole or supernova feedback. So, there are a few viable options, the simplest being that this cold gas was flung into the center from other nearby galaxies. The other is that this gas somehow is directly cooling from the hot gas in the core.”
Neon signs
For their new study, the researchers worked under a key assumption: If the Phoenix cluster’s cold, star-forming gas is coming from within the central galaxy, rather than from the surrounding galaxies, the central galaxy should have not only pockets of hot and cold gas, but also gas that’s in a “warm” in-between phase. Detecting such intermediate gas would be like catching the gas in the midst of extreme cooling, serving as proof that the core of the cluster was indeed the source of the cold stellar fuel.
Following this reasoning, the team sought to detect any warm gas within the Phoenix core. They looked for gas that was somewhere between 10 kelvins and 1 million kelvins. To search for this Goldilocks gas in a system that is 5.8 billion light years away, the researchers looked to JWST, which is capable of observing farther and more clearly than any observatory to date.
The team used the Medium-Resolution Spectrometer on JWST’s Mid-Infrared Instrument (MIRI), which enables scientists to map light in the infrared spectrum. In July 2023, the team focused the instrument on the Phoenix core and collected 12 hours’ worth of infrared images. They looked for a specific wavelength of light that neon gas emits when it has been stripped of a certain number of electrons. This transition occurs at around 300,000 kelvins, or 540,000 degrees Fahrenheit — a temperature within the “warm” range the researchers sought to detect and map. The team analyzed the images and mapped the locations where warm gas appeared within the central galaxy.
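The kelvin-to-Fahrenheit figure quoted above can be verified with a one-line conversion:

```python
# Sanity check of the kelvin-to-Fahrenheit conversion quoted in the text.
def kelvin_to_fahrenheit(k):
    return k * 9 / 5 - 459.67

# 300,000 K is roughly 540,000 degrees F, as stated in the article.
print(round(kelvin_to_fahrenheit(300_000)))
```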
“This 300,000-degree gas is like a neon sign that’s glowing in a specific wavelength of light, and we could see clumps and filaments of it throughout our entire field of view,” Reefe says. “You could see it everywhere.”
Based on the extent of warm gas in the core, the team estimates that the central galaxy is undergoing a huge degree of extreme cooling and is generating an amount of ultracold gas each year that is equal to the mass of about 20,000 suns. With that kind of stellar fuel supply, the team says it’s very likely that the central galaxy is indeed generating its own starburst, rather than using fuel from surrounding galaxies.
“I think we understand pretty completely what is going on, in terms of what is generating all these stars,” McDonald says. “We don’t understand why. But this new work has opened a new way to observe these systems and understand them better.”
This work was funded, in part, by NASA.
Pivot Bio is using microbial nitrogen to make agriculture more sustainable
The Haber-Bosch process, which converts atmospheric nitrogen into ammonia fertilizer, revolutionized agriculture and helped feed the world’s growing population, but it also created huge environmental problems. It is one of the most energy-intensive chemical processes in the world, responsible for 1-2 percent of global energy consumption. The fertilizer it produces also releases nitrous oxide, a potent greenhouse gas that harms the ozone layer, and excess nitrogen routinely runs off farms into waterways, harming marine life and polluting groundwater.
In place of synthetic fertilizer, Pivot Bio has engineered nitrogen-producing microbes to make farming more sustainable. The company, which was co-founded by Professor Chris Voigt, Karsten Temme, and Alvin Tamsir, has engineered its microbes to grow on plant roots, where they feed on the root’s sugars and precisely deliver nitrogen in return.
Pivot’s microbial colonies grow with the plant and produce more nitrogen at exactly the time the plant needs it, minimizing nitrogen runoff.
“The way we have delivered nutrients to support plant growth historically is fertilizer, but that’s an inefficient way to get all the nutrients you need,” says Temme, Pivot’s chief innovation officer. “We have the ability now to help farmers be more efficient and productive with microbes.”
Farmers can replace up to 40 pounds per acre of traditional nitrogen with Pivot’s product, which amounts to about a quarter of the total nitrogen needed for a crop like corn.
Pivot’s products are already being used to grow corn, wheat, barley, oats, and other grains across millions of acres of American farmland, eliminating hundreds of thousands of tons of CO2 equivalent in the process. The company’s impact is even more striking given its unlikely origins, which trace back to one of the most challenging times of Voigt’s career.
A Pivot from despair
The beginning of every faculty member’s career can be a sink-or-swim moment, and by Voigt’s own account, he was drowning. As a freshly minted assistant professor at the University of California at San Francisco, Voigt was struggling to stand up his lab, attract funding, and get experiments started.
Around 2008, Voigt joined a research group out of the University of California at Berkeley that was writing a grant proposal focused on photovoltaic materials. His initial role was minor, but a senior researcher pulled out of the group a week before the proposal had to be submitted, so Voigt stepped up.
“I said ‘I’ll finish this section in a week,’” Voigt recalls. “It was my big chance.”
For the proposal, Voigt detailed an ambitious plan to rearrange the genetics of biological photosynthetic systems to make them more efficient. He barely submitted it in time.
A few months went by, then the proposal reviews finally came back. Voigt hurried to the meeting with some of the most senior researchers at UC Berkeley to discuss the responses.
“My part of the proposal got completely slammed,” Voigt says. “There were something like 15 reviews on it — they were longer than the actual grant — and it’s just one after another tearing into my proposal. All the most famous people are in this meeting, future energy secretaries, future leaders of the university, and it was totally embarrassing. After that meeting, I was considering leaving academia.”
A few discouraging months later, Voigt got a call from Paul Ludden, the dean of the School of Science at UC Berkeley. He wanted to talk.
“As I walk into Paul’s office, he’s reading my proposal,” Voigt recalls. “He sits me down and says, ‘Everybody’s telling me how terrible this is.’ I’m thinking, ‘Oh my God.’ But then he says, ‘I think there’s something here. Your idea is good, you just picked the wrong system.’”
Ludden went on to explain that Voigt should apply his gene-swapping idea to nitrogen fixation. He even offered to send Voigt a postdoc from his lab, Dehua Zhao, to help. Voigt paired Zhao with Temme, and sure enough, the resulting 2011 paper was well received by the nitrogen fixation community.
“Nitrogen fixation has been a holy grail for scientists, agronomists, and farmers for almost a century, ever since somebody discovered the first microbe that can fix nitrogen for legumes like soybeans,” Temme says. “Everybody always said that someday we’ll be able to do this for the cereal crops. The excitement with Pivot was this is the first time that technology became accessible.”
Voigt had moved to MIT in 2010. When the paper came out, he founded Pivot Bio with Temme and another Berkeley researcher, Alvin Tamsir. Since then, Voigt, who is the Daniel I.C. Wang Professor at MIT and the head of the Department of Biological Engineering, has continued collaborating with Pivot on things like increasing nitrogen production, making strains more stable, and making them inducible to different signals from the plant. Pivot has licensed technology from MIT, and the research has also received support from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).
Pivot’s first goals were to gain regulatory approval and prove itself in the marketplace. To gain approval in the U.S., Pivot’s team focused on using DNA from within the same organism rather than bringing in totally new DNA, which simplified the approval process. It also partnered with independent corn seed dealers to get its product to farms. Early deployments occurred in 2019.
Farmers apply Pivot’s product at planting, either as a liquid that gets sprayed on the soil or as a dry powder that is rehydrated and applied to the seeds as a coating. The microbes live on the surface of the growing root system, eating plant sugars and releasing nitrogen throughout the plant’s life cycle.
“Today, our microbes colonize just a fraction of the total sugars provided by the plant,” Temme explains. “They’re also sharing ammonia with the plant, and all of those things are just a portion of what’s possible technically. Our team is always trying to figure out how to make those microbes more efficient at getting the energy they need to grow or at fixing nitrogen and sharing it with the crop.”
In 2023, Pivot started the N-Ovator program to connect companies with growers who practice sustainable farming using Pivot’s microbial nitrogen. Through the program, companies buy nitrogen credits and farmers can get paid by verifying their practices. The program was named one of the Inventions of the Year by Time Magazine last year and has paid out millions of dollars to farmers to date.
Microbial nitrogen and beyond
Pivot is currently selling to farmers across the U.S. and working with smallholder farmers in Kenya. It’s also hoping to gain approval for its microbial solution in Brazil and Canada, which it hopes will be its next markets.
“How do we get the economics to make sense for everybody — the farmers, our partners, and the company?” Temme says of Pivot’s mission. “Because this truly can be a deflationary technology that upends the very expensive traditional way of making fertilizer.”
Pivot’s team is also extending the product to cotton, and Temme says microbes can be a nitrogen source for any type of plant on the planet. Further down the line, the company believes it can help farmers with other nutrients essential to help their crops grow.
“Now that we’ve established our technology, how can Pivot help farmers overcome all the other limitations they face with crop nutrients to maximize yields?” Temme asks. “That really starts to change the way a farmer thinks about managing the entire acre from a price, productivity, and sustainability perspective.”
Cultivators of research
“Intelligent, caring, inspiring, and full of wisdom” is how one student described Kenneth Oye. Another wrote of Maria Yang, “We are beyond lucky to have such a caring, supportive, empathetic and compassionate leader.”
Professors Maria Yang and Kenneth Oye are two members of the 2023-25 Committed to Caring cohort, acknowledged for encouraging their students; advocating for meaningful, interesting research; and participating in their students’ research journeys from beginning to end. For MIT graduate students, the Committed to Caring program recognizes faculty who go above and beyond.
Maria Yang: Inclusion and continual fostering
Professor Maria Yang is the deputy dean of engineering, the Kendall Rohsenow Professor, and a professor of mechanical engineering. She works in design theory, with a focus on the early stages of the design process. Her current research interests include hybrid ways for humans and AI to collaborate during design, as well as ways to design products that encourage users to behave more sustainably.
Yang has been selected as a recipient of the Committed to Caring award for 2023-25. She is known for her inclusive, interdisciplinary work as well as her continuous fostering of students.
Yang founded and leads the Ideation Laboratory at MIT, which is characterized by interdisciplinary work in design, including product design, engineering design, and system design. Students may not feel like they “fit” in their traditional department, but find a home in the Ideation Laboratory. In Yang’s words, her students “collaborate and connect from their shared experiences.”
Yang is one of the mentors of a student-led research project that works toward understanding how users, and other stakeholders who are traditionally not considered, are embedded in design education and practice, and how to support deeper engagement with such users and stakeholders. Yang supported her students on this project in multiple ways, providing mentorship and feedback as well as supporting her students to apply for grants to continue growing the project.
The students and Yang held a first-ever summit as a part of this project. The summit brought together faculty and students from MIT as well as other universities and companies. All the summit stakeholders are working to support instructors in thoughtfully considering users and stakeholders in their courses, and are striving to create a community for students and instructors engaged in this space.
“Maria will never take credit for the outcomes of the project, giving all the credit to other members of the project team,” the nominator wrote, “but she has been instrumental in supporting us and encouraging us to continue.”
Yang continued to be a supportive and caring mentor, championing and supporting students’ work. When one nominator was still a prospective student, Yang met with them in support of their application. When the student was eventually admitted into the Media Lab rather than mechanical engineering, Yang welcomed the student into her research group.
As the student’s career evolved, Yang became a member of their thesis committee and provided letters of recommendation for their academic job search. The nominator turned to Maria for advice on how to strategize what applications they would submit and which departments were the best fit for them.
Yang took time to sit with the student, practiced their presentation with them, and gave support where the student was lacking confidence. All in all, Yang helped them have the strength to continue to achieve their goals, ultimately enabling them to earn their PhD.
The nominator was grateful for the crucial role Maria played in fostering their growth: “My MIT experience would have been very different without Maria.”
Kenneth Oye: Inspiring advisor and caring mentor
Oye is a professor of political science and data systems and society as well as the director of the Program on Emerging Technologies (PoET). His work revolves around international relations, political economy, and technology policy. His current work in technology policy centers on adaptive management of risks associated with synthetic biology and pharmaceuticals and on equity in health policy.
Oye has been selected as a recipient of the Committed to Caring award for 2023-25. He is a highly effective instructor, influential advisor, and considerate mentor.
Oye teaches in clear, easy-to-follow language, enriched by personal stories and deep experience. His lectures are interactive and engaging, so that learners truly internalize the material and approach it with curiosity and intent.
A nominator wrote that Oye encourages his students to investigate broadly, offering frequent advice on improving research design and sharing analysis techniques. “He acknowledged my effort and ideas,” the nominator shared, “but also always encouraged me to explore further.”
The student added that while parts of their dissertation were challenging, Oye transformed the work into an enjoyable intellectual quest.
Oye cares about the work his students produce, but he attends equally to his students as individuals. He consistently starts weekly meetings with check-ins and concerns himself with each student’s well-being and personal development.
Students feel comfortable coming to Oye when they need to share their difficulties and seek counsel. Their mentoring relationship had built such trust, one nominator remarked, that when the student faced personal challenges, “Ken was the first person I thought of that I could share my struggles with safely and ask for advice.”
As an instructor, an advisor, and as a mentor, Oye has helped his students learn and grow beyond the classroom.
One of his students wrote, “Oye’s truly a gem to learn from and work with, and I believe he has been a great asset to MIT’s generations of students.”
MIT engineers develop a fully 3D-printed electrospray engine
An electrospray engine applies an electric field to a conductive liquid, generating a high-speed jet of tiny droplets that can propel a spacecraft. These miniature engines are ideal for small satellites called CubeSats that are often used in academic research.
Since electrospray engines utilize propellant more efficiently than the powerful chemical rockets used on the launchpad, they are better suited for precise, in-orbit maneuvers. The thrust generated by a single electrospray emitter is tiny, so electrospray engines typically use an array of emitters that are operated uniformly in parallel.
However, these multiplexed electrospray thrusters are typically made via expensive and time-consuming semiconductor cleanroom fabrication, which limits who can manufacture them and how the devices can be applied.
To help break down barriers to space research, MIT engineers have demonstrated the first fully 3D-printed, droplet-emitting electrospray engine. Their device, which can be produced rapidly and for a fraction of the cost of traditional thrusters, uses commercially accessible 3D printing materials and techniques. The devices could even be fully made in orbit, as 3D printing is compatible with in-space manufacturing.
By developing a modular process that combines two 3D printing methods, the researchers overcame the challenges of fabricating a complex device composed of macroscale and microscale components that must work together seamlessly.
Their proof-of-concept thruster comprises 32 electrospray emitters that operate together, generating a stable and uniform flow of propellant. The 3D-printed device generated as much or more thrust than existing droplet-emitting electrospray engines. With this technology, astronauts might quickly print an engine for a satellite without needing to wait for one to be sent up from Earth.
“Using semiconductor manufacturing doesn’t match up with the idea of low-cost access to space. We want to democratize space hardware. In this work, we are proposing a way to make high-performance hardware with manufacturing techniques that are available to more players,” says Luis Fernando Velásquez-García, a principal research scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper describing the thrusters, which appears in Advanced Science.
He is joined on the paper by lead author Hyeonseok Kim, an MIT graduate student in mechanical engineering.
A modular approach
An electrospray engine has a reservoir of propellant that flows through microfluidic channels to a series of emitters. An electrostatic field is applied at the tip of each emitter, triggering an electrohydrodynamic effect that shapes the free surface of the liquid into a cone-shaped meniscus that ejects a stream of high-speed charged droplets from its apex, producing thrust.
The emitter tips need to be as sharp as possible to attain the electrohydrodynamic ejection of propellant at a low voltage. The device also requires a complex hydraulic system to store and regulate the flow of liquid, efficiently shuttling propellant through microfluidic channels.
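To get a feel for the scales involved, an idealized back-of-the-envelope sketch: a droplet with charge-to-mass ratio q/m accelerated through potential V leaves at v = sqrt(2(q/m)V), and thrust is the mass flow rate times that exhaust speed. Every parameter value below is an illustrative assumption, not a measurement from the MIT device.

```python
import math

# Idealized electrospray figures; all numeric values here are illustrative
# assumptions, not measurements from the MIT thruster.
def exhaust_velocity(q_over_m, voltage):
    # A droplet accelerated through potential V converts qV of electrical
    # energy into kinetic energy: (1/2) m v^2 = q V, so v = sqrt(2 (q/m) V).
    return math.sqrt(2 * q_over_m * voltage)

def thrust(mass_flow, v):
    # Thrust is the momentum ejected per second: F = mdot * v.
    return mass_flow * v

v = exhaust_velocity(q_over_m=500.0, voltage=1500.0)  # C/kg and volts, assumed
F = thrust(mass_flow=1e-9, v=v)                       # ~1 microgram/s, assumed
print(f"{v:.0f} m/s, {F * 1e6:.2f} micronewtons")
```

Even with kilometer-per-second-scale exhaust speeds, the minuscule mass flow yields micronewton-level thrust, which is why practical engines gang many emitters in parallel.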
The emitter array is composed of eight emitter modules. Each emitter module contains an array of four individual emitters that must work in unison, forming a larger system of interconnected modules.
“Using a one-size-fits-all fabrication approach doesn’t work because these subsystems are at different scales. Our key insight was to blend additive manufacturing methods to achieve the desired outcomes, then come up with a way to interface everything so the parts work together as efficiently as possible,” Velásquez-García says.
To accomplish this, the researchers utilized two different types of vat photopolymerization (VPP) printing. VPP involves shining light onto a photosensitive resin, which solidifies to form 3D structures with smooth, high-resolution features.
The researchers fabricated the emitter modules using a VPP method called two-photon printing. This technique utilizes a highly focused laser beam to solidify resin in a precisely defined area, building a 3D structure one tiny brick, or voxel, at a time. This level of detail enabled them to produce extremely sharp emitter tips and narrow, uniform capillaries to carry propellant.
The emitter modules are fitted into a rectangular casing called a manifold block, which holds each in place and supplies the emitters with propellant. The manifold block also integrates the emitter modules with the extractor electrode that triggers propellant ejection from the emitter tips when a suitable voltage is applied. Fabricating the larger manifold block using two-photon printing would be infeasible because of the method’s low throughput and limited printing volume.
Instead, the researchers used a technique called digital light processing, which utilizes a chip-sized projector to shine light into the resin, solidifying one layer of the 3D structure at a time.
“Each technology works very well at a certain scale. Combining them, so they work together to produce one device, lets us take the best of each method,” Velásquez-García says.
Propelling performance
But 3D printing the electrospray engine components is only half the battle. The researchers also conducted chemical experiments to ensure the printing materials were compatible with the conductive liquid propellant. If not, the propellant might corrode the engine or cause it to crack, which is undesirable for hardware meant for long-term operation with little to no maintenance.
They also developed a method to clamp the separate parts together in a way that avoids misalignments that could hamper performance and keeps the device watertight.
In the end, their 3D-printed prototype was able to generate thrust more efficiently than larger, more expensive chemical rockets and outperformed existing droplet electrospray engines.
The researchers also investigated how adjusting the pressure of propellant and modulating the voltage applied to the engine affected the flow of droplets. Surprisingly, they achieved a wider range of thrust by modulating the voltage. This could eliminate the need for a complex network of pipes, valves, or pressure signals to regulate the flow of liquid, leading to a lighter, cheaper electrospray thruster that is also more efficient.
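The voltage effect has a simple idealized backbone: a charged droplet with charge-to-mass ratio q/m accelerated through a potential V exits at velocity sqrt(2(q/m)V), so for a fixed beam current, thrust grows with the square root of the applied voltage. A minimal sketch of that textbook relation follows; the numbers are hypothetical, and real emitters deviate from this ideal:

```python
import math

def ideal_electrospray_thrust(beam_current_a, voltage_v, q_over_m_c_per_kg):
    """Idealized thrust F = I * sqrt(2 * V * m / q) for a monodisperse beam,
    assuming every droplet is accelerated through the full potential."""
    exhaust_velocity = math.sqrt(2.0 * q_over_m_c_per_kg * voltage_v)  # m/s
    mass_flow = beam_current_a / q_over_m_c_per_kg                     # kg/s
    return mass_flow * exhaust_velocity                                # N

# Hypothetical operating point: 1 uA beam, 2 kV, droplet q/m of 500 C/kg
thrust = ideal_electrospray_thrust(1e-6, 2000.0, 500.0)
print(f"{thrust * 1e6:.2f} uN")  # a few micronewtons, a typical electrospray scale
```

In this simplified model, doubling the voltage raises thrust by only a factor of sqrt(2), one reason voltage modulation lends itself to fine-grained control over a continuous range of thrust.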
“We were able to show that a simpler thruster can achieve better results,” Velásquez-García says.
The researchers want to continue exploring the benefits of voltage modulation in future work. They also want to fabricate denser and larger arrays of emitter modules. In addition, they may explore the use of multiple electrodes to decouple the triggering of the electrohydrodynamic ejection of propellant from the setting of the shape and speed of the emitted jet. In the long run, they also hope to demonstrate a CubeSat that uses a fully 3D-printed electrospray engine during its operation and deorbiting.
This research is funded, in part, by a MathWorks fellowship and the NewSat Project, and was carried out, in part, using MIT.nano facilities.
Gift from Sebastian Man ’79, SM ’80 supports MIT Stephen A. Schwarzman College of Computing building
The MIT Stephen A. Schwarzman College of Computing has received substantial support for its striking new headquarters on Vassar Street in Cambridge, Massachusetts. A major gift from Sebastian Man ’79, SM ’80 will be recognized with the naming of a key space in the building, enriching the academic and research activities of the MIT Schwarzman College of Computing and MIT.
Man, the first major donor to support the building since Stephen A. Schwarzman’s foundational gift established the Schwarzman College of Computing, is the chair and CEO of Chung Mei International Holdings Ltd., a manufacturer of domestic kitchen electrics and air treatment products for major international brands. Particularly supportive of education, he is a council member of the Hong Kong University of Science and Technology, serves on the Board of the Morningside College of the Chinese University of Hong Kong, and was a member of the court of the University of Hong Kong and the chair of the Harvard Business School Association of Hong Kong. His community activities include serving as a council member of The Better Hong Kong Foundation and executive committee member of the International Chamber of Commerce Hong Kong China Business Council, as well as of many other civic and business organizations. Man is also part of the MIT parent community, as his son, Brandon Man, is a graduate student in the Department of Mechanical Engineering.
Man’s gift to the college was recognized at a ceremony and luncheon in Hong Kong, where he resides, on Jan. 10. MIT Chancellor for Academic Advancement W. Eric L. Grimson PhD ’80, who hosted the event, noted that in addition to his financial generosity to the Institute, Man has played many important volunteer roles at MIT. “His service includes advancing MIT near and far as a member of the Corporation Development Committee, sharing his expertise through his recent selection as a new member of the Mechanical Engineering Visiting Committee, and, most recently, his acceptance of an invitation to join the Schwarzman College of Computing Dean’s Advisory Council,” he said.
“This new building is a home for the MIT community and a home for the people who are helping shape the future of computing and AI,” said MIT Schwarzman College of Computing Dean Daniel Huttenlocher SM ’84, PhD ’88 in a video greeting to Man and his family. “Thanks to your gift, the college is better positioned to achieve its mission of creating a positive impact on society, and for that we are deeply grateful.”
The state-of-the-art MIT Schwarzman College of Computing headquarters was designed to reflect the mission of meeting rapidly changing needs in computing through new approaches to research, education, and real-world engagement. The space provides MIT’s campus with a home base for computing research groups, new classrooms, and convening and event spaces.
Those at the Hong Kong event also enjoyed a video message from Stephen A. Schwarzman, chair, CEO, and co-founder of Blackstone and the college’s founding donor. “When we first announced the new college at MIT,” he said, “MIT said it was reshaping itself for the future. That future has come even faster than we all thought. Today, AI is part of the daily vernacular, and MIT’s ability to impact its development with your support is more tangible than ever.”
Sebastian Man spoke fondly of his years at the Institute. “The place really opened my eyes … and sharpened my intellect. It offered me a whole brave new world. Everything was interesting and everything was exciting!
“I come from a family where my father taught us that one should always be grateful to those people and places that have helped you to become who you are today,” Man continued. “MIT instilled in me unending intellectual curiosity and the love for the unknown, and I am honored and privileged to be associated with the MIT Schwarzman College of Computing.”
Bridging philosophy and AI to explore computing ethics
During a meeting of class 6.C40/24.C40 (Ethics of Computing), Professor Armando Solar-Lezama poses the same impossible question to his students that he often asks himself in the research he leads with the Computer Assisted Programming Group at MIT:
"How do we make sure that a machine does what we want, and only what we want?"
At this moment, what some consider the golden age of generative AI, this may seem like an urgent new question. But Solar-Lezama, the Distinguished Professor of Computing at MIT, is quick to point out that this struggle is as old as humankind itself.
He begins to retell the Greek myth of King Midas, the monarch who was granted the godlike power to transform anything he touched into solid gold. Predictably, the wish backfired when Midas accidentally turned everyone he loved into lifeless gold.
"Be careful what you ask for because it might be granted in ways you don't expect," he says, cautioning his students, many of them aspiring mathematicians and programmers.
Digging into MIT archives to share slides of grainy black-and-white photographs, he narrates the history of programming, from the 1970s Pygmalion machine that required incredibly detailed cues to the late-'90s computer software that took teams of engineers years and an 800-page document to program.
While remarkable in their time, these processes took too long to reach users. They left no room for spontaneous discovery, play, and innovation.
Solar-Lezama talks about the risks of building modern machines that don't always respect a programmer's cues or red lines, and that are equally capable of exacting harm as saving lives.
Titus Roesler, a senior majoring in electrical engineering, nods knowingly. Roesler is writing his final paper on the ethics of autonomous vehicles and weighing who is morally responsible when one hypothetically hits and kills a pedestrian. His argument questions underlying assumptions behind technical advances, and considers multiple valid viewpoints. It leans on the philosophy theory of utilitarianism. Roesler explains, "Roughly, according to utilitarianism, the moral thing to do brings about the most good for the greatest number of people."
MIT philosopher Brad Skow, with whom Solar-Lezama developed and is team-teaching the course, leans forward and takes notes.
A class that demands technical and philosophical expertise
Ethics of Computing, offered for the first time in Fall 2024, was created through the Common Ground for Computing Education, an initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.
The instructors alternate lecture days. Skow, the Laurance S. Rockefeller Professor of Philosophy, brings his discipline's lens for examining the broader implications of today's ethical issues, while Solar-Lezama, who is also the associate director and chief operating officer of MIT's Computer Science and Artificial Intelligence Laboratory, offers perspective through his.
Skow and Solar-Lezama attend one another's lectures and adjust their follow-up class sessions in response. Learning from one another in real time has made for more dynamic and responsive class conversations. A weekly recitation with graduate students from philosophy or computer science breaks down the week's topic and ties the course content together through lively discussion.
"An outsider might think that this is going to be a class that will make sure that these new computer programmers being sent into the world by MIT always do the right thing," Skow says. However, the class is intentionally designed to teach students a different skill set.
Determined to create an impactful semester-long course that did more than lecture students about right or wrong, philosophy professor Caspar Hare conceived the idea for Ethics of Computing in his role as an associate dean of the Social and Ethical Responsibilities of Computing. Hare recruited Skow and Solar-Lezama as the lead instructors, as he knew they could do something more profound than that.
"Thinking deeply about the questions that come up in this class requires both technical and philosophical expertise. There aren't other classes at MIT that place both side-by-side,” Skow says.
That's exactly what drew senior Alek Westover to enroll. The math and computer science double major explains, "A lot of people are talking about how the trajectory of AI will look in five years. I thought it was important to take a class that will help me think more about that."
Westover says he's drawn to philosophy because of an interest in ethics and a desire to distinguish right from wrong. In math classes, he's learned to write down a problem statement and receive instant clarity on whether he's successfully solved it or not. However, in Ethics of Computing, he has learned how to make written arguments for "tricky philosophical questions" that may not have a single correct answer.
For example, "One problem we could be concerned about is, what happens if we build powerful AI agents that can do any job a human can do?" Westover asks. "If we are interacting with these AIs to that degree, should we be paying them a salary? How much should we care about what they want?"
There's no easy answer, and Westover assumes he'll encounter many other dilemmas in the workplace in the future.
“So, is the internet destroying the world?”
The semester began with a deep dive into AI risk, or the notion of "whether AI poses an existential risk to humanity," unpacking free will, the science of how our brains make decisions under uncertainty, and debates about the long-term liabilities and regulation of AI. A second, longer unit zeroed in on "the internet, the World Wide Web, and the social impact of technical decisions." The end of the term looks at privacy, bias, and free speech.
One class topic was devoted to provocatively asking: "So, is the internet destroying the world?"
Senior Caitlin Ogoe is majoring in Course 6-9 (Computation and Cognition). Being in an environment where she can examine these types of issues is precisely why the self-described "technology skeptic" enrolled in the course.
Growing up with a mom who is hearing impaired and a little sister with a developmental disability, Ogoe became the default family member whose role it was to call providers for tech support or program iPhones. She leveraged her skills into a part-time job fixing cell phones, which paved the way for her to develop a deep interest in computation, and a path to MIT. However, a prestigious summer fellowship in her first year made her question the ethics behind how consumers were impacted by the technology she was helping to program.
"Everything I've done with technology is from the perspective of people, education, and personal connection," Ogoe says. "This is a niche that I love. Taking humanities classes around public policy, technology, and culture is one of my big passions, but this is the first course I've taken that also involves a philosophy professor."
The following week, Skow lectures on the role of bias in AI. Ogoe, who is entering the workforce next year but plans to eventually attend law school to focus on regulating related issues, raises her hand four times to ask questions and share counterpoints.
Skow digs into examining COMPAS, a controversial AI software tool that uses an algorithm to predict the likelihood that people accused of crimes will go on to re-offend. According to a 2018 ProPublica article, COMPAS was more likely to flag Black defendants as future criminals, giving false positives at twice the rate it did for white defendants.
The class session is dedicated to determining whether the article warrants the conclusion that the COMPAS system is biased and should be discontinued. To do so, Skow introduces two different theories on fairness:
"Substantive fairness is the idea that a particular outcome might be fair or unfair," he explains. "Procedural fairness is about whether the procedure by which an outcome is produced is fair." A variety of conflicting criteria of fairness are then introduced, and the class discusses which are plausible, and what conclusions they warrant about the COMPAS system.
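One of those conflicting criteria, equality of false positive rates across groups, is the one at the heart of the ProPublica analysis, and it is easy to make concrete. The toy data below is invented purely for illustration and is unrelated to the real COMPAS dataset:

```python
def false_positive_rate(records):
    """Fraction of people who did NOT re-offend but were flagged high-risk.
    Each record is a pair (flagged_high_risk, actually_reoffended)."""
    non_reoffender_flags = [flagged for flagged, reoffended in records if not reoffended]
    return sum(non_reoffender_flags) / len(non_reoffender_flags)

# Invented toy data: two demographic groups of five defendants each
group_a = [(True, False), (True, True), (True, False), (False, False), (False, True)]
group_b = [(True, True), (False, False), (False, False), (False, False), (True, False)]

print(false_positive_rate(group_a))  # 2 of 3 non-reoffenders flagged -> ~0.67
print(false_positive_rate(group_b))  # 1 of 4 non-reoffenders flagged -> 0.25
```

Under a procedural-fairness criterion demanding equal false positive rates, group_a is treated worse than group_b even if the scores are equally well calibrated for both groups, which is exactly the kind of tension the class debates.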
Later on, the two professors go upstairs to Solar-Lezama's office to debrief on how the exercise went that day.
"Who knows?" says Solar-Lezama. "Maybe five years from now, everybody will laugh at how people were worried about the existential risk of AI. But one of the themes I see running through this class is learning to approach these debates beyond media discourse and getting to the bottom of thinking rigorously about these issues."
To keep hardware safe, cut out the code’s clues
Imagine you’re a chef with a highly sought-after recipe. You write your top-secret instructions in a journal to ensure you remember them, but its location within the book is evident from the folds and tears on the edges of that often-referenced page.
Much like recipes in a cookbook, the instructions to execute programs are stored in specific locations within a computer’s physical memory. The standard security method — referred to as “address space layout randomization” (ASLR) — scatters this precious code to different places, but hackers can now find their new locations. Instead of hacking the software directly, they use approaches called microarchitectural side attacks that exploit hardware, identifying which memory areas are most frequently used. From there, they can use code to reveal passwords and make critical administrative changes in the system (also known as code-reuse attacks).
To enhance ASLR’s effectiveness, researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have found a way to make these footprints vanish. Their “Oreo” method mitigates hardware attacks by removing randomized bits of addresses that lead to a program’s instructions before they’re translated to a physical location. It scrubs away traces of where code gadgets (or short sequences of instructions for specific tasks) are located before hackers can find them, efficiently enhancing security for operating systems like Linux.
Oreo has three layers, much like its tasty namesake. Between the virtual address space (which is used to reference program instructions) and the physical address space (where the code is located), Oreo adds a new “masked address space.” This re-maps code from randomized virtual addresses to fixed locations before it is executed within the hardware, making it difficult for hackers to trace the program’s original locations in the virtual address space through hardware attacks.
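The effect of that middle layer can be sketched in a few lines. The following is a conceptual model of the three address spaces, not the paper's actual hardware design; the point is only that the secret randomized offset is subtracted out before any address that hardware side channels could observe is formed:

```python
import random

PAGE = 0x1000

class ThreeLayerTranslation:
    """Toy model of an Oreo-style virtual -> masked -> physical translation."""
    def __init__(self):
        # ASLR: the code's virtual base address is randomized (the secret).
        self.random_base = random.randrange(0x10000, 0x80000, PAGE)
        # Masked space: fixed across runs, so cache and TLB footprints
        # carry no information about the randomized virtual layout.
        self.masked_base = 0x0
        # Physical placement, chosen by the OS as usual.
        self.physical_base = 0x40000

    def virtual_to_masked(self, vaddr):
        # Strip the secret offset before the address reaches the hardware.
        return vaddr - self.random_base + self.masked_base

    def masked_to_physical(self, maddr):
        return maddr - self.masked_base + self.physical_base

t = ThreeLayerTranslation()
vaddr = t.random_base + 0x2340        # a code address in the randomized layout
maddr = t.virtual_to_masked(vaddr)    # 0x2340 on every run, whatever the base
paddr = t.masked_to_physical(maddr)
```

In this toy model, an attacker who observes which masked (and hence physical) addresses are touched learns nothing about `random_base`, which is the offset a code-reuse attack would need.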
“We got the idea to structure it in three layers from Oreo cookies,” says Shixin Song, an MIT PhD student in electrical engineering and computer science (EECS) and CSAIL affiliate who is the lead author of a paper about the work. “Think of the white filling in the middle of that treat — our version of that is a layer that essentially whites out traces of gadget locations before they end up in the wrong hands.”
Senior author Mengjia Yan, an MIT associate professor of EECS and CSAIL principal investigator, believes Oreo’s masking abilities could make address space layout randomization more secure and reliable.
“ASLR was deployed in operating systems like Windows and Linux, but within the last decade, its security flaws have rendered it almost broken,” says Yan. “Our goal is to revive this mechanism in modern systems to defend microarchitecture attacks, so we’ve developed a software-hardware co-design mechanism that prevents leaking secret offsets that tell hackers where the gadgets are.”
The CSAIL researchers will present their findings about Oreo at the Network and Distributed System Security Symposium later this month.
Song and her coauthors evaluated how well Oreo could protect Linux by simulating hardware attacks in gem5, a platform commonly used to study computer architecture. The team found that it could prevent microarchitectural side attacks without hampering the software it protects.
Song observes that these experiments demonstrate how Oreo is a lightweight security upgrade for operating systems. “Our method introduces marginal hardware changes by only requiring a few extra storage units to store some metadata,” she says. “Luckily, it also has a minimal impact on software performance.”
While Oreo adds an extra step to program execution by scrubbing away revealing bits of data, it doesn’t slow down applications. This efficiency makes it a worthwhile security boost to ASLR for page-table-based virtual memory systems beyond Linux, including those used by major platforms from Intel, AMD, and Arm.
In the future, the team will look to address speculative execution attacks, in which hackers fool computers into predicting their next tasks, then steal the hidden data those mispredictions leave behind. Case in point: the infamous Meltdown and Spectre attacks of 2018.
To defend against speculative execution attacks, the team emphasizes that Oreo needs to be coupled with other security mechanisms (such as Spectre mitigations). This potential limitation extends to applying Oreo to larger systems.
“We think Oreo could be a useful software-hardware co-design platform for a broader type of applications,” says Yan. “In addition to targeting ASLR, we’re working on new methods that can help safeguard the critical crypto libraries widely used to safeguard information across people's network communication and cloud storage.”
Song and Yan wrote the paper with MIT EECS undergraduate researcher Joseph Zhang. The team’s work was supported, in part, by Amazon, the U.S. Air Force Office of Scientific Research, and ACE, a center within the Semiconductor Research Corporation sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA).
Creating smart buildings with privacy-first sensors
Gaining a better understanding of how people move through the spaces where they live and work could make those spaces safer and more sustainable. But no one wants cameras watching them 24/7.
Two former Media Lab researchers think they have a solution. Their company, Butlr, offers places like skilled nursing facilities, offices, and senior living communities a way to understand how people are using buildings without compromising privacy. Butlr uses low-resolution thermal sensors and an analytics platform to help detect falls in elderly populations, save energy, and optimize spaces for work.
“We have this vision of using the right technology to understand people’s movements and behaviors in space,” says Jiani Zeng SM ’20, who co-founded Butlr with former Media Lab research affiliate Honghao Deng. “So many resources today go toward cameras and AI that take away people’s privacy. We believe we can make our environments safer, healthier, and more sustainable without violating privacy.”
To date, the company has sold more than 20,000 of its privacy-preserving sensors to senior living and skilled nursing facilities as well as businesses with large building footprints, including Verizon, Netflix, and Microsoft. In the future, Butlr hopes to enable more dynamic spaces that can understand and respond to the ways people use them.
“Space should be like a digital user interface: It should be multi-use and responsive to your needs,” Deng says. “If the office has a big room with people working individually, it should automatically separate into smaller rooms, or lights and temperature should be adjusted to save energy.”
Building intelligence, with privacy
As an undergraduate at Tianjin University in China, Deng joined the Media Lab’s City Science Group as a visiting student in 2016. He went on to complete his master’s at Harvard University, but he returned to the Media Lab as a research affiliate and led projects around what he calls responsive architecture: spaces that can understand their users’ needs through non-camera sensors.
“My vision of the future of building environments emerged from the Media Lab,” Deng says. “The real world is the largest user interface around us — it’s not the screens. We all live in a three-dimensional world and yet, unlike the digital world, this user interface doesn’t yet understand our needs, let alone the critical situations when someone falls in a room. That could be life-saving.”
Zeng came to MIT as a master’s student in the Integrated Design and Management program, which was run jointly out of the MIT Sloan School of Management and the School of Engineering. She also worked as a research assistant at the Media Lab and the Computer Science and Artificial Intelligence Lab (CSAIL).
The pair met during a hackathon at the Media Lab and continued collaborating on various projects. During that time, they worked with MIT’s Venture Mentoring Service (VMS) and the MIT I-Corps Program. When they graduated in 2019, they decided to start a company based on the idea of creating smart buildings with privacy-preserving sensors. Crucial early funding came from the Media Lab-affiliated E14 Fund.
“I tell every single MIT founder they should have the E14 Fund in their cap table,” Deng says. “They understand what it takes to go from an MIT student to a founder, and to transition from the ‘scientist brain’ to the ‘inventor brain.’ We wouldn’t be where we are today without MIT.”
Ray Stata ’57, SM ’58, the founder of Analog Devices, is also an investor in Butlr and serves as Butlr’s board director.
“We would love to give back to the MIT community once we become successful entrepreneurs like Ray, whose advice and mentoring has been invaluable,” Deng says.
After launching, the founders had to find the right early customers for their real-time sensors, which can discern rough body shapes but no personally identifiable information. They interviewed hundreds of people before starting with owners of office spaces.
“People have zero baseline data on what’s happening in their workplace,” Deng says. “That’s especially true since the Covid-19 pandemic made people hybrid, which has opened huge opportunities to cut the energy use of large office spaces. Sometimes, the only people in these buildings are the receptionist and the cleaner.”
Butlr’s multiyear, battery-powered sensors can track daily occupancy in each room and give other insights into space utilization that can be used to reduce energy use. For companies with a lot of office space, the opportunities are immense. One Butlr customer has 40 building leases. Deng says optimizing the HVAC controls based on usage could amount to millions of dollars saved.
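As a rough illustration of the kind of analysis such sensors enable, the sketch below counts occupants as connected warm regions in a single low-resolution thermal frame. The grid size, temperatures, and threshold are all invented, and Butlr's actual processing is certainly more sophisticated:

```python
def count_occupants(frame, ambient=20.0, threshold=6.0):
    """Count connected warm regions (rough body shapes) in a low-res thermal grid."""
    rows, cols = len(frame), len(frame[0])
    warm = [[frame[r][c] - ambient > threshold for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if warm[r][c] and not seen[r][c]:
                blobs += 1
                # Flood-fill this warm region so it is counted only once.
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and warm[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return blobs

# Invented 8x8 frame: ambient background with two warm "people"
frame = [[20.0] * 8 for _ in range(8)]
for y, x in [(1, 1), (1, 2), (2, 1)]:
    frame[y][x] = 28.0
for y, x in [(5, 6), (6, 6)]:
    frame[y][x] = 29.0
print(count_occupants(frame))  # → 2
```

Because the grid is far too coarse to resolve faces or other identifying detail, per-room counts and movement patterns can be extracted without collecting personally identifiable information.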
“We can be like the Google Analytics for these spaces without any concerns in terms of privacy,” Deng says.
The founders also knew the problem went well beyond office spaces.
“In skilled nursing facilities, instead of office spaces it’s individual rooms, all with people who may need the nurse’s help,” Deng says. “But the nurses have no visibility into what’s happening unless they physically enter the room.”
Acute care environments and senior living facilities are another key market for Butlr. The company’s platform can detect falls and instances when someone isn’t getting out of bed to alert staff. The system integrates with nurse calling systems to alert staff when something is wrong.
The “nerve cells” of the building
Butlr is continuing to develop analytics that give important insights into spaces. For instance, today the platform can use information around movement in elderly populations to help detect problems like urinary tract infections. Butlr also recently started a collaboration with Harvard Medical School’s Beth Israel Deaconess Medical Center and the University of Massachusetts at Amherst’s Artificial Intelligence and Technology Center for Connected Care in Aging and Alzheimer’s Disease. Through the project, Butlr will try to detect changes in movement that could indicate declining cognitive or physical abilities. Those insights could be used to provide aging patients with more supervision.
“In the near term we are preventing falls, but the vision is that when you look up in any building or home, you’ll see Butlr,” Deng says. “This could allow older adults to age in place with dignity and privacy.”
More broadly, Butlr’s founders see their work as an important way to shape the future of AI technology, which is expected to be a growing part of everyone’s lives.
“We’re the nerve cells in the building, not the eyes,” Deng says. “That’s the future of AI we believe in: AI that can transform regular rooms into spaces that understand people and can use that understanding to do everything from making efficiency improvements to saving lives in senior care communities. That’s the right way to use this powerful technology.”
Mapping mRNA through its life cycle within a cell
When Xiao Wang applied to faculty jobs, many of the institutions where she interviewed thought her research proposal — to study the life cycle of RNA in cells and how it influences normal development and disease — was too broad.
However, that was not the case when she interviewed at MIT, where her future colleagues embraced her ideas and encouraged her to be even more bold.
“What I’m doing now is even broader, even bolder than what I initially proposed,” says Wang, who holds joint appointments in the Department of Chemistry and the Broad Institute of MIT and Harvard. “I got great support from all my colleagues in my department and at Broad so that I could get the resources to conduct what I wanted to do. It’s also a demonstration of how brave the students are. There is a really innovative culture and environment here, so the students are not scared of taking on something that might sound weird or unrealistic.”
Wang’s work on RNA brings together students from chemistry, biology, computer science, neuroscience, and other fields. In her lab, research is focused on developing tools that pinpoint where in a given cell different types of messenger RNA are translated into proteins — information that can offer insight into how cells control their fate and what goes wrong in disease, especially in the brain.
“The joint position between MIT Chemistry and the Broad Institute was very attractive to me because I was trained as a chemist, and I would like to teach and recruit students from chemistry. But meanwhile, I also wanted to get exposure to biomedical topics and have collaborators outside chemistry. I can collaborate with biologists, doctors, as well as computational scientists who analyze all these daunting data,” she says.
Imaging RNA
Wang began her career at MIT in 2019, just before the Covid-19 pandemic began. Until that point, she hardly knew anyone in the Boston area, but she found a warm welcome.
“I wasn’t trained at MIT, and I had never lived in Boston before. At first, I had very small social circles, just with my colleagues and my students, but amazingly, even during the pandemic, I never felt socially isolated. I just felt so plugged in already, even though it’s a very close, small circle,” she says.
Growing up in China, Wang became interested in science in middle school, when she was chosen to participate in China’s National Olympiad in math and chemistry. That gave her the chance to learn college-level course material, and she ended up winning a gold medal in the nationwide chemistry competition.
“That exposure was enough to draw me into initially mathematics, but later on more into chemistry. That’s how I got interested in a more science-oriented major and then career path,” Wang says.
At Peking University, she majored in chemistry and molecular engineering. There, she worked with Professor Jian Pei, who gave her the opportunity to work independently on her own research project.
“I really like to do research because every day you have a hypothesis, you have a design, and you make it happen. It’s like playing a video game: You have this roughly daily feedback loop. Sometimes it’s a reward, sometimes it’s not. I feel it’s more interesting than taking a class, so I think that made me decide I should apply for graduate school,” she says.
As a graduate student at the University of Chicago, she became interested in RNA while doing a rotation in the lab of Chuan He, a professor of chemistry. He was studying chemical modifications that affect the function of messenger RNA — the molecules that carry protein-building instructions from DNA to ribosomes, where proteins are assembled.
Wang ended up joining He’s lab, where she studied a common mRNA modification known as m6A, which influences how efficiently mRNA is translated into protein and how fast it gets degraded in the cell. She also began to explore how mRNA modifications affect embryonic development. As a model for these studies, she was using zebrafish, which have transparent embryos that develop from fertilized eggs into free-swimming larvae within two days. That got her interested in developing methods that could reveal where different types of RNA were being expressed, by imaging the entire organism.
Such an approach, she soon realized, could also be useful for studying the brain. As a postdoc at Stanford University, she started to develop RNA imaging methods, working with Professor Karl Deisseroth. There are existing techniques for identifying mRNA molecules that are expressed in individual cells, but those don’t offer information about exactly where in the cells different types of mRNA are located. She began developing a technique called STARmap that could accomplish this type of “spatial transcriptomics.”
In this technique, researchers first use formaldehyde to crosslink all of the mRNA molecules in place. Then, the tissue is washed with fluorescent DNA probes that are complementary to the target mRNA sequences. These probes can then be imaged and sequenced, revealing the locations of each mRNA sequence within a cell. This allows for the visualization of mRNA molecules encoding thousands of different genes within single cells.
“I was leveraging my background in the chemistry of RNA to develop this RNA-centered brain mapping technology, which allows you to use RNA expression profiles to define brain cell types and also visualize their spatial architecture,” Wang says.
Tracking the RNA life cycle
Members of Wang’s lab are now working on expanding the capability of the STARmap technique so that it can be used to analyze brain function and brain wiring. They are also developing tools that will allow them to map the entire life cycle of mRNA molecules, from synthesis to translation to degradation, and track how these molecules are transported within a cell during their lifetime.
One of these tools, known as RIBOmap, pinpoints the locations of mRNA molecules as they are being translated at ribosomes. Another tool allows the researchers to measure how quickly mRNA is degraded after being transcribed.
“We are trying to develop a toolkit that will let us visualize every step of the RNA life cycle inside cells and tissues,” Wang says. “These are newer generations of tool development centered around these RNA biological questions.”
One of these central questions is how different cell types control their RNA life cycles differently, and how that affects their differentiation. Differences in RNA control may also be a factor in diseases such as Alzheimer’s. In a 2023 study, Wang and MIT Professor Morgan Sheng used a version of STARmap to discover how cells called microglia become more inflammatory as amyloid-beta plaques form in the brain. Wang’s lab is also pursuing studies of how differences in mRNA translation might affect schizophrenia and other neurological disorders.
“The reason we think there will be a lot of interesting biology to discover is because the formation of neural circuits is through synapses, and synapse formation and learning and memory are strongly associated with localized RNA translation, which involves multiple steps including RNA transport and recycling,” she says.
In addition to investigating those biological questions, Wang is also working on ways to boost the efficiency of mRNA therapeutics and vaccines by changing their chemical modifications or their topological structure.
“Our goal is to create a toolbox and RNA synthesis strategy where we can precisely tune the chemical modification on every particle of RNA,” Wang says. “We want to establish how those modifications will influence how fast mRNA can produce protein, and in which cell types they could be used to more efficiently produce protein.”
Puzzling out climate change
Shreyaa Raghavan’s journey into solving some of the world’s toughest challenges started with a simple love for puzzles. By high school, her knack for problem-solving naturally drew her to computer science. Through her participation in an entrepreneurship and leadership program, she built apps and twice made it to the semifinals of the program’s global competition.
Her early successes made a computer science career seem like an obvious choice, but Raghavan says a significant competing interest left her torn.
“Computer science sparks that puzzle-, problem-solving part of my brain,” says Raghavan ’24, an Accenture Fellow and a PhD candidate in MIT’s Institute for Data, Systems, and Society. “But while I always felt like building mobile apps was a fun little hobby, it didn’t feel like I was directly solving societal challenges.”
Her perspective shifted when, as an MIT undergraduate, Raghavan participated in an Undergraduate Research Opportunity in the Photovoltaic Research Laboratory, now known as the Accelerated Materials Laboratory for Sustainability. There, she discovered how computational techniques like machine learning could optimize materials for solar panels — a direct application of her skills toward mitigating climate change.
“This lab had a very diverse group of people, some from a computer science background, some from a chemistry background, some who were hardcore engineers. All of them were communicating effectively and working toward one unified goal — building better renewable energy systems,” Raghavan says. “It opened my eyes to the fact that I could use very technical tools that I enjoy building and find fulfillment in that by helping solve major climate challenges.”
With her sights set on applying machine learning and optimization to energy and climate, Raghavan joined Cathy Wu’s lab when she started her PhD in 2023. The lab focuses on building more sustainable transportation systems, a field that resonated with Raghavan due to its universal impact and its outsized role in climate change — transportation accounts for roughly 30 percent of greenhouse gas emissions in the United States.
“If we were to throw all of the intelligent systems we are exploring into the transportation networks, by how much could we reduce emissions?” she asks, summarizing a core question of her research.
Wu, an associate professor in the Department of Civil and Environmental Engineering, stresses the value of Raghavan's work.
“Transportation is a critical element of both the economy and climate change, so potential changes to transportation must be carefully studied,” Wu says. “Shreyaa’s research into smart congestion management is important because it takes a data-driven approach to add rigor to the broader research supporting sustainability.”
Raghavan’s contributions have been recognized with the Accenture Fellowship, a cornerstone of the MIT-Accenture Convergence Initiative for Industry and Technology.
As an Accenture Fellow, she is exploring the potential impact of technologies for avoiding stop-and-go traffic and its emissions, using systems such as networked autonomous vehicles and digital speed limits that vary according to traffic conditions — solutions that could advance decarbonization in the transportation sector at relatively low cost and in the near term.
Raghavan says she appreciates the Accenture Fellowship not only for the support it provides, but also because it demonstrates industry involvement in sustainable transportation solutions.
“It’s important for the field of transportation, and also energy and climate as a whole, to synergize with all of the different stakeholders,” she says. “I think it’s important for industry to be involved in this issue of incorporating smarter transportation systems to decarbonize transportation.”
Raghavan has also received a fellowship supporting her research from the U.S. Department of Transportation.
“I think it’s really exciting that there’s interest from the policy side with the Department of Transportation and from the industry side with Accenture,” she says.
Raghavan believes that addressing climate change requires collaboration across disciplines. “I think with climate change, no one industry or field is going to solve it on its own. It’s really got to be each field stepping up and trying to make a difference,” she says. “I don’t think there’s any silver-bullet solution to this problem. It’s going to take many different solutions from different people, different angles, different disciplines.”
With that in mind, Raghavan has been very active in the MIT Energy and Climate Club since joining about three years ago. The club, she says, “was a really cool way to meet lots of people who were working toward the same goal, the same climate goals, the same passions, but from completely different angles.”
This year, Raghavan is on the community and education team, which works to build the community at MIT that is working on climate and energy issues. As part of that work, Raghavan is launching a mentorship program for undergraduates, pairing them with graduate students who help the undergrads develop ideas about how they can work on climate using their unique expertise.
“I didn’t foresee myself using my computer science skills in energy and climate,” Raghavan says, “so I really want to give other students a clear pathway, or a clear sense of how they can get involved.”
Raghavan has embraced her area of study even in terms of where she likes to think.
“I love working on trains, on buses, on airplanes,” she says. “It’s really fun to be in transit and working on transportation problems.”
Anticipating a trip to New York to visit a cousin, she holds no dread for the long train trip.
“I know I’m going to do some of my best work during those hours,” she says. “Four hours there. Four hours back.”
Can deep learning transform heart failure prevention?
The ancient Greek philosopher and polymath Aristotle once concluded that the human heart is tri-chambered and that it is the single most important organ in the body, governing motion, sensation, and thought.
Today, we know that the human heart actually has four chambers and that the brain largely controls motion, sensation, and thought. But Aristotle was correct in observing that the heart is a vital organ, pumping blood to the rest of the body to reach other vital organs. When a life-threatening condition like heart failure strikes, the heart gradually loses the ability to supply other organs with enough blood and nutrients to function.
Researchers from MIT and Harvard Medical School recently published an open-access paper in Nature Communications Medicine, introducing a noninvasive deep learning approach that analyzes electrocardiogram (ECG) signals to accurately predict a patient’s risk of developing heart failure. In a clinical trial, the model showed results with accuracy comparable to gold-standard but more-invasive procedures, giving hope to those at risk of heart failure. The condition has recently seen a sharp increase in mortality, particularly among young adults, likely due to the growing prevalence of obesity and diabetes.
“This paper is a culmination of things I’ve talked about in other venues for several years,” says the paper’s senior author Collin Stultz, director of the Harvard-MIT Program in Health Sciences and Technology and an affiliate of the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic). “The goal of this work is to identify those who are starting to get sick even before they have symptoms so that you can intervene early enough to prevent hospitalization.”
Of the heart’s four chambers, two are atria and two are ventricles — the right side of the heart has one atrium and one ventricle, as does the left. In a healthy human heart, these chambers operate in a rhythmic synchrony: oxygen-poor blood flows into the heart via the right atrium. The right atrium contracts and the pressure generated pushes the blood into the right ventricle, where the blood is then pumped into the lungs to be oxygenated. The oxygen-rich blood from the lungs then drains into the left atrium, which contracts, pumping the blood into the left ventricle. Another contraction follows, and the blood is ejected from the left ventricle via the aorta, flowing into arteries branching out to the rest of the body.
“When the left atrial pressures become elevated, the drainage of blood from the lungs into the left atrium is impeded because it’s a higher-pressure system,” Stultz explains. In addition to being a professor of electrical engineering and computer science, Stultz is also a practicing cardiologist at Mass General Hospital (MGH). “The higher the pressure in the left atrium, the more pulmonary symptoms you develop — shortness of breath and so forth. Because the right side of the heart pumps blood through the pulmonary vasculature to the lungs, the elevated pressures in the left atrium translate to elevated pressures in the pulmonary vasculature.”
The current gold standard for measuring left atrial pressure is right heart catheterization (RHC), an invasive procedure that requires a thin tube (the catheter) attached to a pressure transmitter to be inserted into the right heart and pulmonary arteries. Physicians often prefer to assess risk noninvasively before resorting to RHC, by examining the patient’s weight, blood pressure, and heart rate.
But in Stultz’s view, these measures are coarse, as evidenced by the fact that one in four heart failure patients is readmitted to the hospital within 30 days. “What we are seeking is something that gives you information like that of an invasive device, other than a simple weight scale,” Stultz says.
In order to gather more comprehensive information on a patient’s heart condition, physicians typically use a 12-lead ECG, in which 10 adhesive patches are stuck onto the patient and linked with a machine that produces information from 12 different angles of the heart. However, 12-lead ECG machines are only accessible in clinical settings, and they are not typically used to assess heart failure risk.
Instead, what Stultz and other researchers propose is a Cardiac Hemodynamic AI monitoring System (CHAIS), a deep neural network capable of analyzing ECG data from a single lead — in other words, the patient only needs to have a single adhesive, commercially-available patch on their chest that they can wear outside of the hospital, untethered to a machine.
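The paper's actual architecture isn't described here, but the general idea behind a model like CHAIS — a small neural network that maps a single-lead ECG waveform to a risk score — can be sketched in a few lines. The toy NumPy code below uses untrained, randomly initialized convolutional filters; every name, filter size, and parameter is a hypothetical stand-in, not the researchers' model.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid 1-D cross-correlation of a waveform with a learned filter."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def risk_score(ecg, kernels, weights, bias):
    """Toy forward pass: conv filters -> ReLU -> global average pool -> sigmoid."""
    feats = []
    for k in kernels:
        act = np.maximum(conv1d(ecg, k), 0.0)  # ReLU nonlinearity
        feats.append(act.mean())               # global average pooling
    logit = np.dot(feats, weights) + bias
    return 1.0 / (1.0 + np.exp(-logit))        # score squashed into (0, 1)

# Hypothetical single-lead trace: a noisy periodic signal standing in for an ECG.
rng = np.random.default_rng(0)
ecg = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.05 * rng.standard_normal(1000)

# Random (untrained) parameters; a real model would learn these from labeled data.
kernels = [rng.standard_normal(16) for _ in range(4)]
weights = rng.standard_normal(4)
score = risk_score(ecg, kernels, weights, bias=0.0)
print(f"risk score: {score:.3f}")  # some value between 0 and 1
```

In a trained system, the filters and weights would be fit on ECGs labeled with catheterization-derived pressures, so the output could be read as an estimate of elevated left atrial pressure rather than an arbitrary number.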
To compare CHAIS with the current gold standard, RHC, the researchers selected patients who were already scheduled for a catheterization and asked them to wear the patch for 24 to 48 hours before the procedure; patients removed the patch before catheterization took place. “When you get to within an hour-and-a-half [before the procedure], it’s 0.875, so it’s very, very good,” Stultz explains. “Thereby a measure from the device is equivalent and gives you the same information as if you were cathed in the next hour-and-a-half.”
“Every cardiologist understands the value of left atrial pressure measurements in characterizing cardiac function and optimizing treatment strategies for patients with heart failure,” says Aaron Aguirre SM '03, PhD '08, a cardiologist and critical care physician at MGH. “This work is important because it offers a noninvasive approach to estimating this essential clinical parameter using a widely available cardiac monitor.”
Aguirre, who completed a PhD in medical engineering and medical physics at MIT, expects that with further clinical validation, CHAIS will be useful in two key areas: first, it will aid in selecting patients who will most benefit from more invasive cardiac testing via RHC; and second, the technology could enable serial monitoring and tracking of left atrial pressure in patients with heart disease. “A noninvasive and quantitative method can help in optimizing treatment strategies in patients at home or in hospital,” Aguirre says. “I am excited to see where the MIT team takes this next.”
The benefits aren’t limited to patients. For people with hard-to-manage heart failure, keeping them out of the hospital without a permanent implant is a challenge, and each readmission takes up more space and more time of an already beleaguered and understaffed medical workforce.
The researchers have another ongoing clinical trial using CHAIS, in collaboration with MGH and Boston Medical Center, which they hope to conclude soon so they can begin data analysis.
“In my view, the real promise of AI in health care is to provide equitable, state-of-the-art care to everyone, regardless of their socioeconomic status, background, and where they live,” Stultz says. “This work is one step towards realizing this goal.”