Feed aggregator
How Push Notifications Can Betray Your Privacy (and What to Do About It)
A phone’s push notifications can contain a significant amount of information about you, your communications, and what you do throughout the day. They’re important enough to government investigations that Apple and Google now both require a judge’s order to hand details about push notifications over to law enforcement, and even with that requirement Apple shares data on hundreds of users. More recently, we also learned from a 404 Media report that law enforcement forensic extraction tools can unearth the text from deleted notifications, including those from secure messaging tools, like Signal. The good news is that you can mitigate some of this risk.
There are two points where notifications may betray your privacy: when they’re transmitted over cloud servers and once they land on the device. Let’s start with the cloud. It might seem like push notifications come directly from an app, but they are typically routed through either Apple or Google’s servers first (depending on whether you use iOS or Android). According to a letter sent to the Department of Justice by Senator Wyden, the content of those notifications may be visible to Apple and Google, and at the very least the companies collect some metadata about which apps send a notification and when. App providers have to make the decision to hide the content from Apple and Google and implement that functionality; Signal is one app that does this.
Then, once the notifications land on your phone, depending on your settings, the notification content may be visible on your lock screen without needing to unlock the device. This can be dangerous if you lose your device, someone steals it, or it’s confiscated by law enforcement.
You may clear notifications after looking at them. But it turns out the content of notifications gets recorded in your device’s internal storage, which makes them susceptible to recovery with certain types of forensic tools. Notification content may even persist after the app is deleted, if the OS doesn’t fully purge the app’s notification data.
We still have a lot of unanswered questions about how the notification databases work on devices. We do not know how long notifications are stored, or whether they’re backed up to the cloud. If they are, and those backups are not end-to-end encrypted, the cloud provider could access the content of messages, and the backups would also be vulnerable to law enforcement demands for data.
Which is all to say that there are myriad ways that law enforcement can access the content or metadata of push notifications. Let’s fix that.
Consider the Strongest Notification Protections for Your Secure Messaging Apps
Secure chat tools are designed to keep the content of the messages safe inside the app. So, for secure chat apps like WhatsApp and Signal, that means the company that makes those apps cannot see the content of your messages, and they’re only accessible on your and your recipients’ devices. Once messages land on a device, it’s still important to consider some privacy precautions, particularly with notifications.
Signal
Signal offers three levels of information to include in notifications, all of which are pretty self-explanatory:
- Name, Content, and Actions (Name and message on Android) shows the entirety of a message as well as who sent it (on iPhone you can also slide to reply, mark as read, or call back).
- Name only shows just the name of the sender.
- No Name or Content (No name or message on Android) will only show that you have a message from Signal, not who sent it or what it’s about.
To change your settings:
- On iPhone: Tap your profile picture, then Settings > Notifications > Show.
- On Android: Tap your profile picture, then Notifications > Show.
WhatsApp
WhatsApp only has one option for this, and it’s currently limited to iPhone, but you can at least tell the app not to include the content of a message in the notification:
- Open WhatsApp for iPhone, tap the “You” bar, then Notifications, and disable the Show preview option.
Check your other apps to see if they offer similar settings.
Limit Your Notifications Device-Wide
Since Apple and Google manage push notifications for their respective devices, they also have some visibility into certain data. Push notification data can include certain types of metadata, like which app sent a notification and when, as well as the account ID associated with the phone. In some cases, Apple and Google may have access to unencrypted content, including the content of the text in a notification or other information from the app itself.
For most app notifications, there’s no simple way to figure out what metadata might be gleaned from a notification, or whether the notification content is encrypted at all. But some app developers have described details along these lines. For example, Signal president Meredith Whittaker explained on social media how the Signal app handles notifications entirely on-device. Searching online for an app name along with “notification privacy,” “notification encryption” or “notification metadata” may help answer your questions, or you may need to dig around in support forums for the app.
It’s also good to reconsider whether any app should be sending you notifications to begin with. Aside from a potential decrease in the number of distractions you endure throughout the day, or the level of chaos on display on your lockscreen, limiting the apps that can send notifications and what content is visible in them can improve your privacy with respect to the sorts of metadata that may be gathered by the companies, as well as any content that may be viewable if someone has physically accessed your device.
To check and change your settings on iPhone:
- Open Settings > Notifications.
- The Show Previews option lets you choose whether notification content appears on the lock screen: “Always” shows it without requiring you to unlock the device, “When Unlocked” shows it only after an unlock, and “Never” shows no details at all, just that you have a notification from an app.
- Alternatively, you can scroll down and change these settings per app. Just tap the app name, then the Show Previews menu, and choose how you’d like them to appear. Or, if you’ve decided you don’t want notifications from that app at all, turn off the Allow Notifications toggle.
To check and change your settings on Android:
The core version of Android leaves many of these notification settings to individual app developers rather than controlling them platform-wide.
- Open Settings > Notifications > App notifications to disable notifications from any app completely. Some apps may also offer internal notification options for specific types of notices, like new messages, that you can control in the app itself. Tap an app name, then tap the Additional settings in the app option to potentially customize it more.
- You can also experiment with the sensitive content setting. This relies on the app’s developer having marked notifications properly (see the developer-side sketch below), but when they have, most notifications will require at least unlocking the device to see them. Open Settings > Notifications > Notifications on lock screen and disable “Show sensitive content.”
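For the curious, here is roughly what the developer side of that setting looks like. This is a minimal sketch using Android’s standard NotificationCompat API, not any particular app’s code; the channel ID, icon, and function name are placeholders:

```kotlin
import android.content.Context
import androidx.core.app.NotificationCompat

// Minimal sketch: mark the full notification as sensitive and supply a
// stripped-down "public version" for the lock screen. Channel ID and icon
// are placeholders, not real app values.
fun buildPrivateNotification(context: Context, sender: String, body: String) =
    NotificationCompat.Builder(context, "messages")
        .setSmallIcon(android.R.drawable.ic_dialog_info)
        .setContentTitle(sender)          // full details, visible after unlock
        .setContentText(body)
        .setVisibility(NotificationCompat.VISIBILITY_PRIVATE) // redact when locked
        .setPublicVersion(                // what a locked device shows instead
            NotificationCompat.Builder(context, "messages")
                .setSmallIcon(android.R.drawable.ic_dialog_info)
                .setContentTitle("New message") // no sender, no content
                .build()
        )
        .build()
```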
In an attempt to make notifications easier to skim, both Android and iOS offer optional AI-powered summaries of notification content. On an individual app level, WhatsApp offers this as well. Some of these summarization tools, like Apple’s, run on the device, while others, like WhatsApp’s, do not. This can all be a lot to keep track of, and sending data off the device may create some level of risk for some messages.
Since this is a bit more complicated, we have another blog post that walks through the steps to take to protect messaging from accidentally ending up in AI tools built into Apple and Google's devices. For WhatsApp specifically, we have a blog detailing when you might want to turn on the app’s “Advanced Chat Privacy” feature, which can disable summaries for both yourself and others in the chat.
Balancing security, privacy, and usability with something like push notifications is a complicated task. At the very least, Apple and Google should better ensure that the content of these notifications isn’t transmitted over their servers in plain text. The companies need to also make sure that device operating systems don’t back up the notification database to the cloud, and when an app is deleted, that all notification data is purged.
We appreciate that apps like Signal allow you to control what’s visible with notifications on a per-app basis, and we’d like to see this level of granularity of choices in other secure messaging tools, like WhatsApp. Likewise, more apps should handle push notifications similarly to the way Signal does, where a ping is sent to wake up the app to check for messages, and the content of that message is never sent across servers.
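To make that pattern concrete, here is a minimal sketch of a content-free push handler on Android using Firebase Cloud Messaging. It illustrates the general approach, not Signal’s actual code; the three helper functions are hypothetical stand-ins for an app’s own end-to-end encrypted transport:

```kotlin
import com.google.firebase.messaging.FirebaseMessagingService
import com.google.firebase.messaging.RemoteMessage

// The push itself is just a wake-up ping: no sender, no text, nothing for
// the push servers to read beyond "this app got a push."
class WakeUpMessagingService : FirebaseMessagingService() {
    override fun onMessageReceived(message: RemoteMessage) {
        // Deliberately ignore message.data -- nothing private was routed
        // through the push service to begin with.
        for (envelope in fetchPendingEnvelopes()) {
            val plaintext = decryptLocally(envelope)
            showNotification(plaintext) // still honors the user's preview settings
        }
    }
}

// Hypothetical helpers standing in for the app's own encrypted channel:
private fun fetchPendingEnvelopes(): List<ByteArray> = TODO("app-specific E2EE fetch")
private fun decryptLocally(envelope: ByteArray): String = TODO("keys never leave the device")
private fun showNotification(text: String): Unit = TODO("app-specific notification")
```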
3 Questions: A running shoe that adapts to the runner
Granular convection takes place everywhere: candy in a box, sand on the beach, foam in a cushion. Often referred to as the “Brazil nut effect,” granular convection occurs when solid, independent, irregularly shaped particles reorder themselves following agitation. One might think, intuitively, that the larger pieces fall to the bottom, but it is their size, and not their density, that alters their location, and the larger pieces end up on the top.
In the world of competitive running, elite athletes have their footwear individually designed for needs such as foot shape and pressure points. Comfortable and supportive footwear can assist optimal performance. However, most footwear is standardized and doesn’t offer personalized performance.
MIT associate professor of architecture Skylar Tibbits, founder and co-director of the Self-Assembly Lab in the MIT School of Architecture and Planning, along with various MIT colleagues, has been developing tests surrounding the phenomenon of granular convection within the midsole — or middle layer, between the outsole (bottom) and insole (top) — of running shoes to create a shoe that evolves over time to provide an individualized product. As we approach the running of the 130th Boston Marathon — one of the world's most prominent displays of footwear supporting athletes — Tibbits answers three questions about bead-based technologies as applied to running shoes.
Q: What are the advantages of an adaptive midsole over the current bead-based midsole technology?
A: Currently, the standard midsoles in running shoes are static. They aren’t customized to the shape of our foot or the force we deliver when running or walking. They also don’t change or improve over time as we run in them. Some products — blue jeans, baseball gloves, and hats, for example — get more comfortable as you wear them. We were exploring how this could be taken even further with a running shoe so that you would have the cushion, support, and stiffness where you need it and have it improve these features as you use it so that, over time, the actual performance of the shoe gets better. It’s not a personalized fit; it’s a performance-driven adaptation.
There are three advantages to this technology. The first is that customization is not only for elite athletes. Most elite athletes are already getting gear personalized for their specific needs by their sponsoring brands. Now, customized gear can be available for everyone. Second, customized gear currently does not adapt to an athlete’s performance. But you need your footwear to evolve because your needs as a runner evolve. You need to get the comfort, cushioning, and protection to support your performance.
A third advantage is the manufacturability of this type of shoe. Custom shoes are now made in a factory for the specifications of a single athlete. That doesn’t scale. You can’t produce a manufacturing process where every single person’s shoe is going to be custom-made for them. We’ve shown that every shoe can be the same and mass produced, but, over time, the shoe will evolve to your personal needs. That is a way to get customization without having to change the manufacturing process.
Q: Why the interest in granular systems, and granular convection in particular?
A: We’ve worked on reversible construction techniques with granular jamming over the years, which is at the opposite end of the spectrum. Granular convection promotes the movement of particles; the more they are mixed, the more they separate. Our vision was looking at footwear that adapts with you over time. We thought we could use granular convection as a mechanism for the footwear to evolve.
We combined particles with different stiffnesses, different material properties, and different sizes, so that over time the softer particles, which are the larger ones, rise to the top, and the stiffer, smaller particles sink to the bottom, toward the outsole. We designed how these particles move based on the vibration and the impact of walking and running.
We also designed the container. We had three different particle sizes, and we conducted tests to dial in the right number of steps, so that the midsole evolves over the course of about 20,000 steps, roughly the length of a marathon. We could either speed up or slow down that process.
Q: Are there future applications of customization for granular convection? If so, where do you see your research going next?
A: Any products that need cushioning systems that improve over time would benefit from this technology. With custom packaging, you have molded foam that fits around a product — a flat-screen television, for example — that is tossed out after it has been shipped from factory to distributor to customer. I worked with a furniture company that wrapped blankets around chairs for transport, but there were still some chairs that sustained damage. Maybe we could develop a blanket or some kind of material that adapts over the journey so that it creates just the right amount of cushion for the shape and property of that product and, once it’s delivered, its shape could be “released” and then reused. How can we reset this product in a timely manner so it can be used again?
Wheelchairs are another product where we would want seat cushions that can adapt to how a person sits, the force distribution, and the environment in which they are being used, such as a sidewalk or a gravel path. We considered this as it relates to footwear. You might want to reset your shoes because you’re going to be running road races on a given day and trail races another day. How can we empty and refill the midsole with different particles so it can adapt again? More importantly, how can we upgrade or change our shoes without throwing them away? This is exciting future work for us to explore.
Largest US renewable project begins generating electricity
California judge pauses climate lawsuits against oil and gas industry pending Supreme Court review
EPA stopped tracking emissions. So this university stepped in.
DOJ: Treaty withdrawal bolsters case against NY climate law
Political ‘circus’ engulfs Texas refining hub as water crisis nears
GOP senators take aim at federal court system’s research arm
Shareholder proposals plummet amid Trump-era crackdown
California lawmakers approve bills restricting air quality regulator rules
Hawaiian Electric’s $1B power project is in a flood zone
Turkey demands decisive climate action ahead of UN talks
Heavy rains in Haiti kill 12 people, damage hundreds of homes
Human Trust of AI Agents
Interesting research: “Humans expect rationality and cooperation from LLM opponents in strategic games.”
Abstract: As Large Language Models (LLMs) integrate into our social and economic interactions, we need to deepen our understanding of how humans respond to LLMs opponents in strategic settings. We present the results of the first controlled monetarily-incentivised laboratory experiment looking at differences in human behaviour in a multi-player p-beauty contest against other humans and LLMs. We use a within-subject design in order to compare behaviour at the individual level. We show that, in this environment, human subjects choose significantly lower numbers when playing against LLMs than humans, which is mainly driven by the increased prevalence of ‘zero’ Nash-equilibrium choices. This shift is mainly driven by subjects with high strategic reasoning ability. Subjects who play the zero Nash-equilibrium choice motivate their strategy by appealing to perceived LLM’s reasoning ability and, unexpectedly, propensity towards cooperation. Our findings provide foundational insights into the multi-player human-LLM interaction in simultaneous choice games, uncover heterogeneities in both subjects’ behaviour and beliefs about LLM’s play when playing against them, and suggest important implications for mechanism design in mixed human-LLM systems...
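For readers unfamiliar with the game: in a p-beauty contest, everyone picks a number from 0 to 100, and the winner is whoever lands closest to p times the group average. Iterating best responses drives the guess toward zero, the unique Nash equilibrium the abstract mentions. A quick sketch of that “level-k” reasoning follows; the paper does not state its parameters here, so p = 2/3 and a level-0 guess of 50 are the conventional assumptions:

```kotlin
// Level-k reasoning in a p-beauty contest: a level-0 player guesses the
// middle of the range, and each higher level best-responds to the level
// below by multiplying its guess by p.
fun levelKGuess(k: Int, p: Double = 2.0 / 3.0, level0: Double = 50.0): Double {
    var guess = level0
    repeat(k) { guess *= p }
    return guess
}

fun main() {
    for (k in 0..10) {
        println("level %2d guess: %7.3f".format(k, levelKGuess(k)))
    }
    // The guesses shrink geometrically toward 0 -- the Nash equilibrium
    // that subjects played more often when facing LLM opponents.
}
```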
A regulatory loophole could delay ozone recovery by years
Often hailed as the most successful international environmental agreement of all time, the 1987 Montreal Protocol continues to successfully phase out the global production of chemicals that were creating a growing hole in the ozone layer, causing skin cancer and other adverse health effects.
MIT-led studies have since shown the subsequent reduction in ozone-depleting substances is helping stratospheric ozone to recover. (It could return to 1980 levels by as early as 2040, according to some estimates.) But the Montreal Protocol made an exception in its rules for the use of ozone-depleting substances as feedstocks in the production of other materials. That’s because it was thought that only a small amount — just 0.5 percent — of the ozone-depleting substances used for this purpose would leak into the atmosphere.
In recent years, however, scientists have observed more ozone-depleting substances in the atmosphere than expected, and have increased their estimates of leakage from feedstocks.
Now an international group of scientists, including researchers from MIT, has calculated the impact of different feedstock leakage rates on the ozone layer’s fragile recovery. They find that the higher leakage rates, if not addressed by the Montreal Protocol, could delay ozone recovery by about seven years.
“We’ve realized in the last few years that these feedstock chemicals are a bug in the system,” says author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry, who was part of the original research team that linked the chemicals to the ozone hole. “Production of ozone-depleting substances has pretty much ceased around the world except for this one use, which is when you have a chemical you convert into something else.”
The paper, which was published in Nature Communications today, is the first to comprehensively quantify the impact of leaked feedstocks, which are currently used to make plastics and nonstick chemicals. They are also used to make substitute chemicals for the ones regulated under the Montreal Protocol. The researchers say it shows the importance of curbing use and preventing leakage of such feedstocks, especially as the production of their end products, like plastic, is projected to grow.
“We’ve gotten to the point where, if we want the protocol to be as successful in the future as it has been in the past, the parties really need to think about how to tighten up the emissions of these industrial processes,” says first author Stefan Reimann of the Swiss Federal Laboratories for Materials Science and Technology.
“To me, it’s only fair, because so many other things have already been completely discontinued. So why should this exemption exist if it’s going to be damaging?” says Solomon.
Joining Reimann on the paper are his colleagues Martin K. Vollmer and Lukas Emmenegger; Luke Western and Susan Solomon of the MIT Center for Sustainability Science and Strategy and the Department of Earth, Atmospheric and Planetary Sciences; David Sherry of Nolan-Sherry and Associates Ltd; Megan Lickley of Georgetown University; Lambert Kuijpers of the A/gent Consultancy b.v.; Stephen A. Montzka and John Daniel of the National Oceanic and Atmospheric Administration; Matthew Rigby of the University of Bristol; Guus J.M. Velders of Utrecht University; Qing Liang of the NASA Goddard Space Flight Center; and Sunyoung Park of Kyungpook National University.
Repairing the ozone
In 1985, scientists discovered a growing hole in the ozone layer over Antarctica that was allowing more of the sun’s harmful ultraviolet radiation to reach Earth’s surface. The following year, researchers including Solomon traveled to Antarctica and discovered the cause of the ozone deterioration: a class of chemicals called chlorofluorocarbons, or CFCs, which were then used in refrigeration, air conditioning, and aerosols.
The revelations led to the Montreal Protocol, an international treaty involving 197 countries and the European Union restricting the use of CFCs. The subsequent decision to exempt the use of ozone-depleting substances for use as feedstocks was based partially on industry estimates of how much of their feedstocks leaked.
“It was thought that the emissions of these substances as a feedstock were minor compared to things like refrigerants and foams,” Western says. “It was also believed that leakage from these sources was minor — around half a percent of what went in — because people would essentially be leaking their profits if their feedstocks were released into the atmosphere.”
Unfortunately, some of those assumptions are no longer true. Western and Reimann are part of the Advanced Global Atmospheric Gases Experiment (AGAGE), a global monitoring network co-founded by Ronald Prinn, MIT’s TEPCO Professor of Atmospheric Science. AGAGE monitors emissions of ozone-depleting substances around the world, and in recent years researchers have revised their estimates of feedstock leakage upwards, to about 3.6 percent. For some chemicals, the number was even higher.
In the new paper, the researchers estimated a 3.6 percent feedstock leakage as the baseline for most chemicals. They compared that with a scenario where 0.5 percent of feedstocks are leaked from 2025 onward and a scenario with zero feedstock-related emissions. The researchers also looked at production trends between 2014 and 2024 to project how much of each specific ozone-depleting chemical would be used as feedstock between 2025 and 2100.
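As a purely illustrative sketch of how such leakage scenarios compare, the arithmetic at each step is just production times leakage rate. The production figure below is a placeholder, not the paper’s data or model:

```kotlin
// Illustrative only: compare annual feedstock emissions under the three
// leakage scenarios. The production figure is a placeholder, not the
// study's projection.
fun main() {
    val hypotheticalProductionKt = 1_000.0 // kilotonnes of feedstock per year
    val scenarios = mapOf(
        "baseline leakage (3.6%)" to 0.036,
        "original assumption (0.5%)" to 0.005,
        "zero leakage" to 0.0,
    )
    for ((name, rate) in scenarios) {
        println("%-28s -> %5.1f kt/yr emitted".format(name, hypotheticalProductionKt * rate))
    }
    // At 3.6% leakage, emissions run about seven times higher than the 0.5%
    // assumption behind the original feedstock exemption implied.
}
```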
The analysis shows that until 2050, total ozone-depleting chemical emissions decrease in all scenarios as rising feedstock emissions are offset by declining uses enforced by the Montreal Protocol. In the scenario with continued 3.6 percent leakage, however, emissions level off around 2045, and total emissions only decrease by 50 percent overall by 2100.
The researchers then evaluated the impact of feedstock-related emissions on stratospheric ozone depletion. In the scenario where feedstock leakage is 0.5 percent, the ozone returns to its 1980 status by 2066. In the scenario with zero feedstock leakage, the ozone reclaims its 1980 health in 2065. But in the baseline scenario, the recovery is delayed about seven years, to 2073.
“This paper sends an important message that these emissions are too high and we have to find a way to reduce them,” Reimann says. “Either that means no longer using these substances as feedstocks, swapping out chemicals, or reducing the leakage emissions when they are used.”
A global response
Solomon is confident industries will be able to adjust to the latest findings.
“There are a lot of innovators in the chemical industry,” Solomon says. “They make new chemicals and improve chemicals for a living. It’s true they can perhaps get too entrenched with certain chemicals, but it doesn’t happen that often. Actually, they’re usually quite willing to consider alternatives. There are thousands of other chemicals that could be used instead, so why not switch? That’s been the attitude.”
Solomon says the fact that AGAGE can detect the impact of feedstock emissions is a testament to the progress the world has made in reducing emissions from other sources up to this point. She believes raising awareness of the feedstock problem is the first step.
“This isn’t the first time that the AGAGE Network has made measurements that have allowed the world to see we need to do a little better here or there,” Western says. “Often, it’s just a mistake. Sometimes all it takes is making people more aware of these things to tighten up some processes.”
Members of the Montreal Protocol meet every year. In those meetings, they split into working groups around different topics. Feedstock emissions are already one of those topics, so participants will review the evidence together. Typically, they release a statement about mitigation strategies if needed.
“We wanted to raise the warning flag that something is wrong here,” Reimann says. “We could reduce the period of ozone depletion by years. It might not sound like a long time, but if you could count the skin cancer cases you’d avoid in that time, it would seem quite significant.”
The work was supported, in part, by the U.S. National Science Foundation, the U.S. National Aeronautics and Space Administration (NASA), the Swiss Federal Office for the Environment, the VoLo Foundation, the United Kingdom Natural Environment Research Council, and the Korea Meteorological Administration Research and Development Program.
Youth may increase vulnerability to a carcinogen found in contaminated water and some drugs
A new study from MIT suggests that a carcinogen that has been found in medications and in drinking water contaminated by chemical plants may have a much more severe impact on children than adults.
In a study of mice, the researchers found that juveniles exposed to drinking water containing this compound, known as NDMA, showed dramatically higher rates of DNA damage and cancer than adults.
The findings may help to explain an epidemiological association between childhood cancer and prenatal exposure to NDMA in people living near a contaminated site in Wilmington, Massachusetts, the researchers say. The study also suggests that it is critical to evaluate the impact of potential carcinogens across all ages.
“We really hope that groups that do safety testing will change their paradigm and start looking at young animals, so that we can catch potential carcinogens before people are exposed,” says Bevin Engelward, an MIT professor of biological engineering. “As a solution to cancer, cancer prevention is clearly much better than cancer treatment, so we hope we can spot dangerous chemicals before people are exposed, and therefore prevent extensive cancer risk.”
MIT postdoc Lindsay Volk is the lead author of the paper. Engelward is the senior author of the study, which appears in Nature Communications.
From DNA damage to cancer
NDMA (N-Nitrosodimethylamine) can be generated as a byproduct of many industrial chemical processes, and it is also found in cigarette smoke and processed meats. In recent years, NDMA has been detected in some formulations of the drugs valsartan, ranitidine, and metformin. It was also found in drinking water in Wilmington, Massachusetts, in the 1990s, as a result of contamination from the Olin Chemical site.
In 2021, a study from the Massachusetts Department of Health suggested a link between that water contamination and an elevated incidence of childhood cancer in Wilmington. Between 1990 and 2000, 22 Wilmington children were diagnosed with cancer. The contaminated wells were closed in 2003.
Also in 2021, Engelward and others at MIT published a study on the mechanism of how NDMA can lead to cancer. In the new Nature Communications paper, Engelward and her colleagues set out to see if they could determine why the compound appears to affect children more than adults.
Most studies that evaluate potential carcinogens are performed in mice that are at least 4 to 6 weeks old, and often older. For this study, the researchers studied two groups of mice — one 3 weeks old (juvenile), and one 6 months old (adult). Each group was given drinking water with low levels of NDMA, about five parts per million, for two weeks.
Inside the body, NDMA is metabolized by a liver enzyme called CYP2E1. This produces toxic metabolites that can damage DNA by adding a small chemical group known as a methyl group to DNA bases, creating lesions known as adducts.
When the researchers examined the livers of the mice, they found that juveniles and adults showed similar levels of DNA adducts. However, there were dramatic differences in what happened after that initial damage. In juvenile mice, DNA adducts led to significant accumulation of double-stranded DNA breaks, which occur when cells try to repair adducts. These breaks produce mutations that eventually lead to the development of liver cancer.
In the adult mice, the researchers saw essentially no double-stranded breaks and significantly fewer mutations compared to juveniles. Furthermore, the livers did not develop severe pathology, including tumors, even though they experienced the same initial level of DNA adducts.
“The initial structural changes to the DNA had very different consequences depending on age,” Engelward says. “The double-stranded breaks were exclusively observed in the young.”
Further experiments revealed that these differences stem from differences in the rates of cell proliferation. Cells in the juvenile liver divide rapidly, giving them more opportunity to turn DNA adducts into mutations, while cells of the adult liver rarely divide.
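One way to see why division rate matters so much is to treat each adduct as a race between repair and replication. This toy model is our illustration, not the paper’s analysis: if repair and replication are independent events with rates mu and lambda, the chance that replication reaches the adduct first, locking in a mutation, is lambda / (lambda + mu).

```kotlin
// Toy "race" model: each adduct is either repaired (rate mu) or hit by
// replication (rate lambda), whichever comes first. For exponential
// waiting times, P(mutation) = lambda / (lambda + mu).
fun mutationProbability(divisionRate: Double, repairRate: Double): Double =
    divisionRate / (divisionRate + repairRate)

fun main() {
    val repairRate = 1.0 // arbitrary units; only the ratio matters
    // Juvenile liver: cells divide quickly relative to repair.
    println("fast-dividing cells:   %.3f".format(mutationProbability(2.0, repairRate)))
    // Adult liver: cells rarely divide, so repair almost always wins.
    println("rarely dividing cells: %.3f".format(mutationProbability(0.01, repairRate)))
}
```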
“This really emphasizes the overall problem that we’re trying to highlight in the paper,” Volk says. “With toxicological studies, oftentimes the standard is to use fully grown mice. At that point, they’re already slowing down cell division, so if we are testing the harmful effects of NDMA in adult mice, then we’re completely missing how vulnerable particular groups are, such as younger animals.”
While most of these effects were seen in the liver, because that is where NDMA is metabolized, a few of the mice developed other types of cancer, including lung cancer and lymphoma.
Adult risk is not zero
For most of these studies, the researchers used mice that had two of their DNA repair systems knocked out. This speeds up the mutation process, allowing the researchers to see the effects of NDMA exposure more easily, without needing to study a large population of mice.
However, a small study in mice with normal DNA repair showed that juveniles experienced NDMA-induced double-strand breaks, regenerative proliferation, and large-scale mutations that were completely absent in adults. This occurs because the fast-growing juveniles possess highly active DNA replication machinery that encounters the DNA adducts before the cell has time to repair them.
The researchers also found that if they treated adult mice with thyroid hormone, which stimulates proliferation of liver cells, those cells began accumulating mutations as quickly as the juvenile liver cells. Previous work done in the Engelward laboratory has shown that inflammation can also stimulate cell proliferation-driven vulnerability to DNA damage, so the findings of this study suggest that anything that causes liver inflammation could make the adult liver more vulnerable to damage caused by agents such as NDMA.
“We certainly don’t want to say that adults are completely resistant to NDMA,” Volk says. “Everything impacts your susceptibility to a carcinogen, whether that’s your genetics, your age, your diet, and so forth. In adults, if they have a viral infection, or a high fat diet, or chronic binge alcohol drinking, this can impact proliferation within the liver and potentially make them susceptible to NDMA.”
The researchers are now investigating how a high-fat diet might influence cancer development in mice that also have exposure to NDMA.
This collaborative effort across several MIT labs was funded by the National Institute of Environmental Health Sciences (NIEHS) Superfund Research Program, an NIEHS Core Center Grant, a National Institutes of Health Training Grant, and the Anonymous Fund for Climate Action.
MIT study reveals a new role for cell membranes
Cells are enveloped by a lipid membrane that gives them structure and provides a barrier between the cell and its environment. However, evidence has recently emerged suggesting that these membranes do more than simply provide protection — they also influence the behavior of the protein receptors embedded in them.
A new study from MIT chemists adds further support to that idea. The researchers found that changing the composition of the cell membrane can alter the function of a membrane receptor that promotes proliferation.
Epidermal growth factor receptor (EGFR) can be locked into an overactive state when the cell membrane has a higher than normal concentration of negatively charged lipids, the researchers found. This may help to explain why cancer cells with high levels of those lipids enter a highly proliferative state that allows them to divide uncontrollably.
“The longstanding dogma of what a membrane does is that it’s just a scaffold, an organizational structure. However, there have been increasing observations that suggest that maybe these membrane lipids are actually playing a role in receptor function,” says Gabriela Schlau-Cohen, the Robert T. Haslam and Bradley Dewey Professor of Chemistry at MIT and the senior author of the study.
The findings open up the possibility of discovering new ways to treat tumors by neutralizing the negative charge, which might turn down EGFR signaling, she adds.
Shwetha Srinivasan PhD ’22 is the lead author of the paper, which appears in the journal eLife. Other authors include former MIT postdocs Xingcheng Lin and Raju Regmi, Xuyan Chen PhD ’25, and Bin Zhang, an associate professor of chemistry at MIT.
Receptor dynamics
The EGF receptor, which is found on cells that line body surfaces and organs, is one of many receptors that help control cell growth. Some types of cancer, especially lung cancer and glioblastoma, overexpress the EGF receptor, which can lead to uncontrolled growth.
Like most receptor proteins, EGFR spans the entire cell membrane. Until recently, it has been challenging to study how signals are conveyed across the entire receptor, because of the difficulty of creating membranes that have proteins going all the way through them and then studying both ends of those proteins.
To make it easier to study these signaling processes, Schlau-Cohen’s lab uses nanodiscs, a special type of self-assembling membrane that mimics the cell membrane. When making these discs, the researchers can embed receptors in them, allowing the team to study the function of the full-length receptor.
Using a technique called single molecule FRET (fluorescence resonance energy transfer), the researchers can study how the shape of the receptor changes under different conditions. Single molecule FRET allows them to measure the distance between different parts of the protein by labeling them with fluorescent tags and then measuring how fast energy travels between the tags.
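The distance readout rests on the Förster relation, which underlies all FRET measurements (textbook notation, not specific to this paper): the efficiency of energy transfer E between the two tags falls off with the sixth power of their separation r,

```latex
E = \frac{1}{1 + \left( r / R_0 \right)^6}
```

where R_0, the Förster radius, is the tag separation at which half the energy is transferred, typically a few nanometers. That steep sixth-power dependence is why small conformational changes in the receptor produce large, measurable changes in signal.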
In previous work, Schlau-Cohen and Zhang used single molecule FRET and molecular dynamics simulations to reveal what happens when EGFR binds to EGF. They found that this binding causes the transmembrane section of the receptor to change shape, and that shape-shift triggers the section of the receptor that extends inside the cell to activate cellular machinery that stimulates growth.
Stuck in an overactive state
In the new study, the researchers used a similar approach to investigate how altering the composition of the membrane affects the function of the receptor. First, they explored how elevated levels of negatively charged lipids would affect the cell membrane and EGFR function.
Normally, about 15 percent of the cell membrane is made up of negatively charged lipids. The researchers found that membranes with negatively charged lipids in the range of 15 to 30 percent behaved normally, but if that level reached 60 percent, then the EGFR receptor would become locked into an active state.
In that state, the pro-growth signaling pathway is turned on all the time, even when no EGF is bound to the receptor. Many cancer cells show increased levels of these lipids, and this mechanism could help to explain why those cells are able to grow unchecked, Schlau-Cohen says.
“If the membrane has high levels of negatively charged lipids, then it’s always in that open conformation. It doesn’t matter if ligand is bound or unbound,” she says. “It’s always in the conformation that’s telling the cell to grow, not just when EGF binds.”
The researchers also used this system to explore the role of cholesterol in EGFR function. When the researchers created nanodiscs with elevated cholesterol levels, they found that the membranes became more rigid, and this rigidity suppressed EGFR signaling.
The research was funded by the National Institutes of Health and MIT’s Department of Chemistry.
Waves hit different on other planets
On a calm day, a light breeze might barely ripple the surface of a lake on Earth. But on Saturn’s largest moon Titan, a similar mild wind would kick up 10-foot-tall waves.
This otherworldly behavior is one prediction from a new wave model developed by scientists at MIT. The model is the first to capture the full dynamics of waves and what it takes to whip them up under different planetary conditions.
In a study published in the Journal of Geophysical Research: Planets, the MIT team introduces the model, which they’ve aptly named “PlanetWaves.” They apply the model to predict how waves behave on planetary bodies that might host liquid lakes and oceans, including Titan, ancient Mars, and three planets beyond the solar system.
The model predicts that a gentle wind would be enough to stir up huge waves on Titan, where lakes are filled with light liquid hydrocarbons. In contrast, it would take hurricane-force winds to barely move the surface of a lake on the exoplanet 55 Cancri e, which is thought to be a lava world covered in hot, dense liquid rock.
“On Earth, we get accustomed to certain wave dynamics,” says study author Andrew Ashton, associate scientist at the Woods Hole Oceanographic Institution (WHOI) and faculty member of the MIT-WHOI Joint Program. “But with this model, we can see how waves behave on planets with different liquids, atmospheres, and gravity, which can kind of challenge our intuition.”
The team is particularly keen to understand how waves form on Titan. The large moon is the only planetary body in the solar system other than Earth that is known to currently host liquid lakes.
“Anywhere there’s a liquid surface with wind moving over it, there’s potential to make waves,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT. “For Titan, the tantalizing thing is that we don’t have any direct observation of what these lakes look like. So we don’t know for sure what kind of waves might exist there. Now this model gives us an idea.”
If humans were one day to send a probe to Titan’s lakes, the team’s new model could inform the design of wave-resilient spacecraft.
“You would want to build something that can withstand the energy of the waves,” says lead author Una Schneck, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So it’s important to know what kind of waves these instruments would be up against.”
The study’s co-authors include Charlene Detelich and Alexander Hayes of Cornell University and Milan Curcic of the University of Miami.
“The first puff”
When wind blows over water, it creates waves that can be strong enough to carve out coastlines and redistribute sediment brought to the coast by rivers. Through this process, waves can be a significant force in shaping a landscape over time. Schneck and her colleagues, who study landscape evolution on Earth and other planets, wondered how waves might behave on other worlds where gravity, atmospheric conditions, and liquid compositions can be very different from what is found on Earth.
“There have been attempts in the past to predict how gravity will affect waves on other planets,” Schneck says. “But they don’t quantify other factors such as the composition of the liquid that is making waves. That was the big leap with this project.”
She and her colleagues developed a full wave model that takes into account not just a planet’s gravity, but also properties of its surface liquid, such as its density, viscosity, and surface tension, or how resistant a liquid is to rippling. The team also incorporated the effect of a planet’s atmospheric pressure. With this model, they aimed to predict how a planet’s liquid surface would evolve in response to winds of a given speed.
“Imagine a completely still lake,” Ashton offers. “We’re trying to figure out the first puff that will make those first little tiny ripples, on up to a full ocean wave.”
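As a rough illustration of why those first ripples come easier on Titan, consider the textbook deep-water gravity-capillary relation, in which the slowest ripples a wind can excite travel at c_min = (4·g·σ/ρ)^(1/4). The sketch below uses representative values for water and liquid methane; these are assumptions for illustration, not the parameters or physics of the full PlanetWaves model, which also handles viscosity, atmospheric conditions, and wind growth:

```kotlin
import kotlin.math.pow

// Deep-water gravity-capillary phase speed c(k)^2 = g/k + (sigma/rho)*k is
// minimized at c_min = (4*g*sigma/rho)^(1/4) -- roughly the slowest ripple
// a wind can excite on a still surface.
fun minPhaseSpeed(g: Double, sigma: Double, rho: Double): Double =
    (4.0 * g * sigma / rho).pow(0.25)

fun main() {
    // Earth: water, g = 9.81 m/s^2, sigma ~ 0.072 N/m, rho ~ 1000 kg/m^3
    println("Earth (water):   %.2f m/s".format(minPhaseSpeed(9.81, 0.072, 1000.0)))
    // Titan: liquid methane, g = 1.35 m/s^2, sigma ~ 0.017 N/m, rho ~ 450 kg/m^3
    // (representative textbook values, not the paper's parameters)
    println("Titan (methane): %.2f m/s".format(minPhaseSpeed(1.35, 0.017, 450.0)))
}
```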
Making waves
The team first tested their new model with wave data on Earth. They used measurements of waves that were collected by buoys across Lake Superior over 20 years. They found that the model, which took into account Earth’s gravity, the composition of liquid (water), and atmospheric conditions, was able to accurately predict what windspeeds it would take to generate waves across the lake, and how high the waves grew with a given wind strength.
The researchers then applied the model to predict how waves would behave on other planetary bodies that are known to host liquid on their surface. They looked first to Titan, where NASA’s Cassini mission previously captured radar images of lake formations, which scientists suspect are currently filled with liquid methane and ethane. The team used the new model to calculate the moon’s wave dynamics given its gravity, atmospheric pressure, and liquid composition.
They found that on Titan, it’s surprisingly easy to make waves. The relatively light liquid, combined with low gravity and atmospheric pressure, means that even a gentle wind can stir up huge waves.
“It kind of looks like tall waves moving in slow motion,” Schneck says. “If you were standing on the shore of this lake, you might feel only a soft breeze but you would see these enormous waves flowing toward you, which is not what we would expect on Earth.”
The researchers also considered wave activity on ancient Mars. The Red Planet hosts many impact basins that may have once been filled with water, before the planet’s atmosphere dissipated and the water evaporated away. One of those basins is Jezero Crater, which is currently being explored by NASA’s Perseverance rover. With the new model, the team showed that as Mars’ atmosphere gradually disappeared, reducing its pressure over time, it would have required stronger winds to make the same waves.
Beyond the solar system, the researchers applied the model to three different exoplanets. The first, LHS1140b, is a “cool super-Earth,” meaning that it is colder and larger than Earth. The planet hosts liquid water, though because it is so large, it has stronger gravity. The model showed that a wind that raises waves of a given size on Earth would generate much smaller water waves on the super-Earth, due to the difference in gravity.
The team also considered Kepler 1649b, a Venus-like planet, which has a gravity similar to Earth’s, with lakes of sulfuric acid, which is about twice as dense as water. Under these conditions, the researchers found that it would take strong winds to make even a ripple on the exo-Venus, compared to on Earth.
This effect is even more pronounced for the third planet, 55 Cancri e — a lava world that has both a higher gravity than Earth and a much denser, more viscous surface liquid. Scientists suspect that the planet hosts oceans of liquefied rock. In this environment, the model predicts that hurricane-force winds on Earth, of about 80 miles per hour, would generate only small waves of a few centimeters in height on the lava world.
Aside from illuminating new ways that waves can behave on other planets, Perron hopes the model will answer longstanding questions of planetary landscape formation.
“Unlike on Earth where there is often a delta where a river meets the coast, on Titan there are very few things that look like deltas, even though there are plenty of rivers and coasts. Could waves be responsible for this?” Perron wonders. “These are the kinds of mysteries that this model will help us solve.”
This work was supported, in part, by NASA and the National Science Foundation.
Humanitarian blind spots in Western climate change policy and discourse
Nature Climate Change, Published online: 16 April 2026; doi:10.1038/s41558-026-02613-0
Geothermal energy turns red hot
Drill deep and drill differently. That’s what’s needed to exploit the nearly bottomless promise of geothermal energy in the United States and around the globe, according to participants at the 2026 Spring Symposium, titled “Next-generation geothermal energy for firm power.”
Sponsored by the MIT Energy Initiative (MITEI), the March 4 event drew 120 people, including MIT faculty and students, investors, and representatives from startups, multinational energy companies, and zero-carbon advocacy groups.
“The time feels right to pull together good policy, great corporate partners, and the research and technological innovations … to make significant advances in the widespread utilization of this incredible resource,” said Karen Knutson, the vice president for government affairs at MIT, in welcoming attendees.
Technology from the oil and gas industry helped usher in a first wave of geothermal energy. But chewing vertical holes through rocks in traditional ways can’t deliver on the full potential of this resource. And the real treasure — geologic formations radiating heat at 374 degrees Celsius and above — lies kilometers beneath Earth’s surface, far beyond the reach of most conventional drilling rigs.
Panelists explored the many innovations in accessing and circulating subsurface heat, as well as digging to unprecedented depths through extremely challenging geological conditions, discussing advanced drilling technologies, materials, and subsurface imaging.
This work is needed urgently, as demand for firm (always-on) power skyrockets in response to the electrification of industry and rise of data centers, said Pablo Dueñas‑Martínez, a MITEI research scientist. “We cannot get through this only with solar and wind; we need dense, deployable energy like geothermal.”
From “minuscule” to “almost inexhaustible” energy
In her opening remarks, Carolyn Ruppel, MITEI’s deputy director of science and technology, noted that despite decades of successful projects in places like the United States, Kenya, Iceland, Indonesia, and Turkey, geothermal still contributes only a “minuscule” share of global electricity. “The tremendous heat beneath our feet remains largely untouched,” she said.
Citing MIT’s milestone 2006 study “The Future of Geothermal Energy,” keynote speaker John McLennan, a professor at the University of Utah and co–principal investigator of the U.S. Department of Energy’s Utah FORGE enhanced geothermal systems (EGS) field laboratory, reminded attendees that the continental crust holds enough accessible heat to supply power for generations. “For practical purposes, it’s almost inexhaustible,” he said.
The question now, he said, is how to access that resource economically and responsibly.
At the Utah FORGE test site, McLennan has been part of a team investigating one method — adapting the oil and gas industry’s drilling and reservoir engineering expertise for hot, relatively impermeable rocks.
The project has drilled multiple deep wells into crystalline granitic rock, including a pair of wells that have been hydraulically stimulated and connected. In a recent circulation test, cold water was pumped down one well, flowed through fractures, and returned hot through the other.
“On a commercial basis … this hot water would be converted to electricity at the surface,” McLennan said. “This has now been demonstrated at Utah FORGE.”
The basic physics, in other words, work. The harder problems now are cost, repeatability, and scale.
Geothermal on the grid
Several panels highlighted the fact that next-generation geothermal is already beginning to deliver firm power.
At Lightning Dock, New Mexico, geothermal company Zanskar used a probabilistic modeling framework that simulated thousands of possible subsurface configurations to identify where to drill a new production well at an underperforming geothermal field. By thermal power delivered, the resulting well is now “the most-productive pumped geothermal well in the country,” said Joel Edwards, Zanskar’s co-founder and chief technology officer — powering the entire 15 megawatt (MW) Lightning Dock plant from a single well.
This data-driven approach enables the company to find and develop new resources faster and more cheaply than traditional methods, said Edwards.
José Bona, the director of next-generation geothermal at Turboden, explained how his company’s technology uses specialized turbines and organic working fluids that conserve heat better than water, converting that heat efficiently into electrical power. This closed-cycle technology can utilize low- to medium-temperature heat sources. Turboden is supplying its technology both to the Lightning Dock geothermal facility in New Mexico and to Fervo Energy’s Cape Station in southwest Utah, an EGS project that will begin delivering 100 MW of baseload, clean electricity to the grid this year, aiming for 500 MW by 2028.
In Geretsried, Germany, Eavor has developed its own proprietary closed-loop system by creating a kind of underground radiator.
“We drilled to about 4.5 kilometers vertical depth, completed six horizontal multilateral pairs, and we delivered the first power to the grid in December,” said Christian Besoiu, the team lead of technology development at Eavor. The project will ultimately be capable of supplying 8.2 MW of electricity to the 32,000 households in the Bavarian town of Geretsried and 64 MW of thermal energy to the district in which the town lies, prioritizing heat when needed.
Beyond oil and gas technology
Early geothermal exploration typically targeted preexisting faults using vertical wells left by oil and gas drilling. Today, companies are experimenting with rock fracturing at multiple subsurface levels and creating heat reservoirs in previously untenable formations by using propping materials.
“Instead of vertical wells, we’re going to horizontal wells, we’re going to cased wells, we’re introducing proppants [solid materials that hold open hydraulically fractured rock] … we do dozens of stages with these designs,” said Koenraad Beckers, the geothermal engineering lead at ResFrac. This shale-style approach has already yielded much higher flow rates and more-reliable performance than earlier EGS.
Some current geothermal wells manage to achieve depths close to 15,000 feet using the oil and gas industry’s polycrystalline diamond compact drill bits, which can bore through hard rock like granite at more than 100 feet per hour. But these bits and the rigs that drive them are no match for conditions six or more kilometers down — and it is at those depths that the heat on hand begins to make an overwhelming economic case for geothermal.
“If we go to around 300 to 350 degrees, your power potential increases 10 times,” said Lev Ring, CEO of Sage Geosystems. “At that point, with reasonable CAPEX [capital expenditure] assumptions, levelized cost of electricity [a metric for comparing the cost of electricity across different generation technologies] is around 4 cents, and geothermal becomes cheaper than any other alternative.”
But “at 10 kilometers down … the largest land rigs in existence today cannot handle it,” Ring added. “We need alternatives — new materials, new ways to handle pressure, maybe even welding on the rig … a whole space that has not been addressed yet.”
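A back-of-envelope sketch shows where that factor-of-10 intuition comes from: a well’s electric output scales with mass flow times the enthalpy gain of the fluid times the conversion efficiency, and both the enthalpy and the achievable efficiency jump sharply once wells produce steam near 350 C. The flow rate, enthalpies, and efficiencies below are rough illustrative values, not Sage Geosystems’ figures:

```kotlin
// Toy single-well estimate: electric power (MWe) =
//   mass flow [kg/s] * (h_out - h_in) [kJ/kg] * conversion efficiency / 1000.
fun wellPowerMWe(flowKgPerS: Double, hOut: Double, hIn: Double, eta: Double): Double =
    flowKgPerS * (hOut - hIn) * eta / 1000.0

fun main() {
    // Rough steam-table enthalpies: liquid water ~293 kJ/kg at 70 C,
    // ~852 kJ/kg at 200 C; saturated steam near 350 C ~2560 kJ/kg.
    val conventional = wellPowerMWe(80.0, 852.0, 293.0, 0.12)
    val superhot = wellPowerMWe(80.0, 2560.0, 293.0, 0.20)
    println("Conventional well (200 C liquid): %.1f MWe".format(conventional))
    println("Superhot well (350 C steam):      %.1f MWe".format(superhot))
    // The superhot case delivers roughly 7x the power from the same flow --
    // the kind of jump Ring describes.
}
```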
One panel, featuring Quaise Energy, an MIT spinout with MITEI roots, spotlighted just how radically drilling might change. Co-founder Matt Houde described the company’s millimeter-wave drilling approach, which uses high-frequency electromagnetic waves derived from fusion research to vaporize rock instead of grinding it, as with conventional drilling. In a recent Texas field test, the team drilled 100 meters of hard basement rock in about a month, and is now planning kilometer-scale trials aimed at reaching superhot rock temperatures around 400 C, where each well could deliver many times the power of today’s geothermal projects.
Innovations for deep drilling
Moderating a panel on “MIT innovations for next-generation geothermal,” Andrew Inglis, the venture builder in residence with MIT Proto Ventures, whose position is sponsored by the U.S. Department of Energy GEODE program, framed the Institute’s role in getting such hard-tech ideas out of the lab and into the field. “The way MIT thinks about tech development, uniquely from other universities, can play a very singular role in geothermal commercial liftoff,” he said.
Materials researchers on that panel illustrated the point. Matěj Peč, an associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences, outlined work to build sensors that survive up to 900 C so that rock deformation and fracturing can be studied at supercritical conditions. Michael Short, the Class of 1941 Professor in the Department of Nuclear Science and Engineering, and C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering, respectively described coatings and alloys designed to resist corrosion, fouling, and cracking in extreme environments. In response to audience questions after their talks, Tasan made an important point, highlighting how academics need input from industry to understand the real-world problems (e.g., corrosion of pipes by geofluids) that require engineering solutions.
Other researchers are rethinking how to detect geothermal resources: Wanju Yuan, a research scientist with the Geological Survey of Canada at Natural Resources Canada, is using satellite imagery and thermal infrared sensing to screen vast regions for subtle hot spots and structures, processing thousands of images to identify promising sites in just a few months of work. “It’s a very efficient way to screen potential areas before more expensive exploration, thus reducing exploration and drilling risks,” he said.
Policy as backdrop, not center stage
Policy loomed in the background of many discussions — from bipartisan support for geothermal exploration and tax incentives to issues of regulation and permitting.
For Ruppel, that was by design.
“We wanted this meeting to showcase what’s technically possible and what’s already happening on the ground,” she said. “The policy world is starting to pay attention. Our job is to make sure that when that spotlight turns our way, next-generation geothermal is ready.”
MITEI’s Spring Symposium was followed by a gathering of geothermal entrepreneurs, investors, and energy industry experts co-hosted by MITEI and the Clean Air Task Force. “GeoTech Summit: Accelerating geothermal technology, projects, and deal flow” explored the financing challenges and opportunities of geothermal energy today.
