Feed aggregator
Biden admin calls on Supreme Court to reject Vineyard Wind case
United Airlines to help federal scientists monitor US emissions
Young activists take on Florida government agency in climate lawsuit
EU climate strategy risks Yellow Vests-style backlash, ex-official warns
Saudi Arabia hosting World Cup 2034 will harm climate, experts say
UN talks fail to reach deal to address global drought risks
Climate justice discussions need new participants and new audiences
Nature Climate Change, Published online: 17 December 2024; doi:10.1038/s41558-024-02219-4
MIT researchers introduce Boltz-1, a fully open-source model for predicting biomolecular structures
MIT scientists have released a powerful, open-source AI model, called Boltz-1, that could significantly accelerate biomedical research and drug development.
Developed by a team of researchers in the MIT Jameel Clinic for Machine Learning in Health, Boltz-1 is the first fully open-source model that achieves state-of-the-art performance at the level of AlphaFold3, the model from Google DeepMind that predicts the 3D structures of proteins and other biological molecules.
MIT graduate students Jeremy Wohlwend and Gabriele Corso were the lead developers of Boltz-1, along with MIT Jameel Clinic Research Affiliate Saro Passaro and MIT professors of electrical engineering and computer science Regina Barzilay and Tommi Jaakkola. Wohlwend and Corso presented the model at a Dec. 5 event at MIT’s Stata Center, where they said their ultimate goal is to foster global collaboration, accelerate discoveries, and provide a robust platform for advancing biomolecular modeling.
“We hope for this to be a starting point for the community,” Corso said. “There is a reason we call it Boltz-1 and not Boltz. This is not the end of the line. We want as much contribution from the community as we can get.”
Proteins play an essential role in nearly all biological processes. A protein’s shape is closely connected with its function, so understanding a protein’s structure is critical for designing new drugs or engineering new proteins with specific functionalities. But because of the extremely complex process by which a protein’s long chain of amino acids is folded into a 3D structure, accurately predicting that structure has been a major challenge for decades.
DeepMind’s AlphaFold2, which earned Demis Hassabis and John Jumper the 2024 Nobel Prize in Chemistry, uses machine learning to rapidly predict 3D protein structures that are so accurate they are indistinguishable from those experimentally derived by scientists. This open-source model has been used by academic and commercial research teams around the world, spurring many advancements in drug development.
AlphaFold3 improves upon its predecessors by incorporating a generative AI model, known as a diffusion model, which can better handle the amount of uncertainty involved in predicting extremely complex protein structures. Unlike AlphaFold2, however, AlphaFold3 is not fully open source, nor is it available for commercial use, which prompted criticism from the scientific community and kicked off a global race to build a commercially available version of the model.
For their work on Boltz-1, the MIT researchers followed the same initial approach as AlphaFold3, but after studying the underlying diffusion model, they explored potential improvements. They incorporated those that boosted the model’s accuracy the most, such as new algorithms that improve prediction efficiency.
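The reverse-diffusion idea behind such models can be sketched in a few lines. The following is a generic, illustrative denoising loop over 3D atom coordinates, not Boltz-1’s actual code; `denoiser` stands in for a trained network, and the function name and noise-schedule values are assumptions:

```python
import numpy as np

def denoise_structure(coords_noisy, denoiser, n_steps=50, sigma_max=1.0, sigma_min=0.01):
    """Generic reverse-diffusion loop over 3D atom coordinates.

    `denoiser(x, sigma)` is assumed to return an estimate of the clean
    coordinates given noisy ones -- a stand-in for a trained network.
    """
    sigmas = np.geomspace(sigma_max, sigma_min, n_steps)
    x = np.asarray(coords_noisy, dtype=float)
    for i, sigma in enumerate(sigmas):
        x_pred = denoiser(x, sigma)  # network's guess at the clean structure
        next_sigma = sigmas[i + 1] if i + 1 < n_steps else 0.0
        # Move toward the prediction as the noise level anneals to zero
        x = x_pred + (next_sigma / sigma) * (x - x_pred)
    return x
```

Starting from random coordinates and repeatedly applying the network in this way is what lets a diffusion model express uncertainty: different noise draws yield different plausible structures.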
Along with the model itself, they open-sourced their entire pipeline for training and fine-tuning so other scientists can build upon Boltz-1.
“I am immensely proud of Jeremy, Gabriele, Saro, and the rest of the Jameel Clinic team for making this release happen. This project took many days and nights of work, with unwavering determination to get to this point. There are many exciting ideas for further improvements and we look forward to sharing them in the coming months,” Barzilay says.
It took the MIT team four months of work, and many experiments, to develop Boltz-1. One of their biggest challenges was overcoming the ambiguity and heterogeneity contained in the Protein Data Bank, a collection of all biomolecular structures that thousands of biologists have solved in the past 70 years.
“I had a lot of long nights wrestling with these data. A lot of it is pure domain knowledge that one just has to acquire. There are no shortcuts,” Wohlwend says.
In the end, their experiments show that Boltz-1 attains the same level of accuracy as AlphaFold3 on a diverse set of complex biomolecular structure predictions.
“What Jeremy, Gabriele, and Saro have accomplished is nothing short of remarkable. Their hard work and persistence on this project has made biomolecular structure prediction more accessible to the broader community and will revolutionize advancements in molecular sciences,” says Jaakkola.
The researchers plan to continue improving the performance of Boltz-1 and reduce the amount of time it takes to make predictions. They also invite researchers to try Boltz-1 on their GitHub repository and connect with fellow users of Boltz-1 on their Slack channel.
“We think there are still many, many years of work to improve these models. We are very eager to collaborate with others and see what the community does with this tool,” Wohlwend adds.
Mathai Mammen, CEO and president of Parabilis Medicines, calls Boltz-1 a “breakthrough” model. “By open sourcing this advance, the MIT Jameel Clinic and collaborators are democratizing access to cutting-edge structural biology tools,” he says. “This landmark effort will accelerate the creation of life-changing medicines. Thank you to the Boltz-1 team for driving this profound leap forward!”
“Boltz-1 will be enormously enabling, for my lab and the whole community,” adds Jonathan Weissman, an MIT professor of biology and member of the Whitehead Institute for Biomedical Research who was not involved in the study. “We will see a whole wave of discoveries made possible by democratizing this powerful tool.” Weissman adds that he anticipates that the open-source nature of Boltz-1 will lead to a vast array of creative new applications.
This work was also supported by a U.S. National Science Foundation Expeditions grant; the Jameel Clinic; the U.S. Defense Threat Reduction Agency Discovery of Medical Countermeasures Against New and Emerging (DOMANE) Threats program; and the MATCHMAKERS project supported by the Cancer Grand Challenges partnership financed by Cancer Research UK and the U.S. National Cancer Institute.
Still Flawed and Lacking Safeguards, UN Cybercrime Treaty Goes Before the UN General Assembly, then States for Adoption
Most UN Member States, including the U.S., are expected to support adoption of the flawed UN Cybercrime Treaty when it’s scheduled to go before the UN General Assembly this week for a vote, despite warnings that it poses dangerous risks to human rights.
EFF and its civil society partners (along with cybersecurity and internet companies, press organizations, the International Chamber of Commerce, the United Nations High Commissioner for Human Rights, and others) have for years raised red flags that the treaty authorizes open-ended evidence-gathering powers for crimes with little nexus to core cybercrimes, and that it contains minimal safeguards and limitations.
The final draft, unanimously approved in August by over 100 countries that had participated in negotiations, will permit intrusive surveillance practices in the name of engendering cross-border cooperation.
The treaty that will go before the UN General Assembly contains many troubling provisions and omissions that don’t comport with international human rights standards and leave the implementation of human rights safeguards to the discretion of Member States. Many of these Member States have poor track records on human rights and national laws that don’t protect privacy while criminalizing free speech and gender expression.
Thanks to the work of a coalition of civil society groups that included EFF, the U.S. now seems to recognize this potential danger. In a statement by the U.S. Deputy Representative to the Economic and Social Council, the U.S. said it “shares the legitimate concerns” of industry and civil society, which warned that some states could leverage their human rights-challenged national legal frameworks to enable transnational repression.
We expressed grave concerns that the treaty facilitates requests for user data that will enable cross-border spying and the targeting and harassment of those, for example, who expose and work against government corruption and abuse. Our full analysis of the treaty can be found here.
Nonetheless, the U.S. said it will support the convention when it comes up for this vote, noting among other things that its terms don’t permit parties to use it to violate or suppress human rights.
While that’s true as far as it goes, and is important to include in principle, some Member States’ laws empowered by the treaty already fail to meet human rights standards. And the treaty fails to adopt specific safeguards to truly protect human rights.
The safeguards contained in the convention, such as the need for judicial review in the chapter on procedural measures in criminal investigations, are undermined by being potentially discretionary and contingent on states’ domestic laws. In many countries, those domestic laws don’t require judicial authorization based on reasonable suspicion for surveillance or for real-time collection of traffic data.
For example, our partner Access Now points out that in Algeria, Lebanon, Palestine, Tunisia, and Egypt, cybercrime laws require telecommunications service providers to preemptively and systematically collect large amounts of user data without judicial authorization.
Meanwhile, Jordan’s cybercrime law has been used against LGBTQ+ people, journalists, human rights defenders, and those criticizing the government.
The U.S. says it is committed to combating human rights abuses by governments that misuse national cybercrime statutes and tools to target journalists and activists. Implementing the treaty, it says, must be paired with robust domestic safeguards and oversight.
It’s hard to imagine that governments will voluntarily revise cybercrime laws as they ratify and implement the treaty; what’s more realistic is that the treaty will normalize such frameworks.
Advocating for improvements during the two-year-long negotiations was a tough slog. And while the final version is highly problematic, civil society achieved some wins. An early negotiating document named 34 purported cybercrime offenses to be included, many of which would criminalize forms of speech. Civil society warned of the dangers of including speech-related offenses; the list was dropped in later drafts.
Civil society advocacy also helped secure specific language in the general provision article on human rights specifying that protection of fundamental rights includes freedom of expression, opinion, religion, conscience, and peaceful assembly. Left off the list, though, was gender expression.
The U.S., meanwhile, has called on all states “to take necessary steps within their domestic legal systems to ensure the Convention will not be applied in a manner inconsistent with human rights obligations, including those relating to speech, political dissent, and sexual identity.”
Furthermore, the U.S. government pledges to demand accountability – without saying how it will do so – if states seek to misuse the treaty to suppress human rights. “We will demand accountability for States who try to abuse this Convention to target private companies’ employees, good-faith cybersecurity researchers, journalists, dissidents, and others.” Yet the treaty contains no oversight provisions.
The U.S. said it is unlikely to sign or ratify the treaty “unless and until we see implementation of meaningful human rights and other legal protections by the convention’s signatories.”
We’ll hold the government to its word on this and on its vows to seek accountability. But ultimately, the fate of the U.S. declarations and the treaty’s impact in the U.S. are more than uncertain under a second Trump administration, as ratification would require both the Senate’s advice and consent and the president’s signature.
Trump withdrew from climate, trade, and arms agreements in his first term, so signing the UN Cybercrime Treaty may not be in the cards – a positive outcome, though probably not motivated by concerns for human rights.
Meanwhile, we urge states to vote against adoption this week and not to ratify the treaty at home. The document puts global human rights at risk. In a rush to win consensus, negotiators gave Member States lots of leeway to avoid human rights safeguards in their “criminal” investigations, and now millions of people around the world might pay a high price.
Aurora mapping across North America
As seen across North America at sometimes surprisingly low latitudes, brilliant auroral displays provide evidence of solar activity in the night sky. More is going on than the familiar visible light shows during these events, though: When aurora appear, the Earth’s ionosphere is experiencing an increase in ionization and total electron content (TEC) due to energetic electrons and ions precipitating into the ionosphere.
One extreme auroral event earlier this year (May 10–11) was the Gannon geomagnetic “superstorm,” named in honor of researcher Jennifer Gannon, who passed away suddenly on May 2. During the Gannon storm, both MIT Haystack Observatory researchers and citizen scientists across the United States observed the effects of this event on the Earth’s ionosphere, as detailed in the open-access paper “Imaging the May 2024 Extreme Aurora with Ionospheric Total Electron Content,” published Oct. 14 in the journal Geophysical Research Letters. Contributing citizen scientists included co-author Daniel Bush, who recorded and livestreamed the entire auroral event from his amateur observatory in Albany, Missouri, as well as numerous observers recruited via social media.
Citizen science or community science involves members of the general public who volunteer their time to contribute, often at a significant level, to scientific investigations, including observations, data collection, development of technology, and interpreting results and analysis. Professional scientists are not the only people who perform research. The collaborative work of citizen scientists not only supports stronger scientific results, but also improves the transparency of scientific work on issues of importance to the entire population and increases STEM involvement across many groups of people who are not professional scientists in these fields.
Haystack collected data for this study from a dense network of GNSS (Global Navigation Satellite System, including systems like GPS) receivers across the United States, which monitor changes in ionospheric TEC variations on a time scale of less than a minute. In this study, John Foster and colleagues mapped the auroral effects during the Gannon storm in terms of TEC changes, and worked with citizen scientists to confirm auroral expansion with still photo and video observations.
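The TEC quantity these receivers monitor comes from a standard dual-frequency combination: ionospheric group delay scales as 40.3·TEC/f², so differencing the ranges measured on two carrier frequencies isolates the TEC along the signal path. A minimal sketch follows, using the GPS L1/L2 carrier frequencies; the function name and the idealized, bias-free pseudoranges are simplifying assumptions (real processing must also remove hardware biases and cycle slips):

```python
def slant_tec(p1_m, p2_m, f1_hz=1575.42e6, f2_hz=1227.60e6):
    """Slant TEC (in TEC units) from idealized dual-frequency pseudoranges.

    Ionospheric group delay on each carrier is 40.3 * TEC / f^2 (meters),
    so differencing the two measured ranges isolates TEC.
    """
    K = 40.3  # m^3 s^-2 per (electron/m^2)
    tec_el_m2 = (f1_hz**2 * f2_hz**2) / (K * (f1_hz**2 - f2_hz**2)) * (p2_m - p1_m)
    return tec_el_m2 / 1e16  # 1 TECU = 1e16 electrons per m^2
```

With these frequencies, a 1-meter differential delay corresponds to roughly 9.5 TECU, which is why meter-level range differences resolve the storm-time TEC structure discussed here.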
Both the TEC observations and the procedural incorporation of synchronous imagery from citizen scientists were groundbreaking; this is the first use of precipitation-produced ionospheric TEC to map the occurrence and evolution of a strong auroral display on a continental scale. Lead author Foster says, “These observations validate the TEC mapping technique for detailed auroral studies, and provided groundbreaking detection of strong isolated bursts of precipitation-produced ionization associated with rapid intensification and expansion of auroral activity.”
Haystack scientists also linked their work with citizen observations posted to social media to support the TEC measurements made via the GNSS receiver network. This color imagery, together with the very high TEC levels, led to the finding that the intense red aurora was co-located with the leading edge of the equatorward- and westward-increasing TEC levels, indicating that the TEC enhancement was created by intense low-energy electron precipitation following the geomagnetic superstorm. This storm was exceptionally strong, with auroral activity centered at mid latitudes, where it is only rarely observed. Processes in the stormtime magnetosphere were the immediate cause of the auroral and ionospheric disturbances. These, in turn, were driven by the preceding solar coronal mass ejection and the interaction of the highly disturbed solar wind with Earth's outer magnetosphere. The ionospheric observations reported in this paper are part of this global system of interactions, and their characteristics can be used to better understand our coupled atmospheric system.
Co-author and amateur astronomer Daniel Bush says, “It is not uncommon for ‘citizen scientists’ such as myself to contribute to major scientific research by supplying observations of natural phenomena seen in the skies above Earth. Astronomy and geospace sciences are a couple of scientific disciplines in which amateurs such as myself can still contribute greatly without leaving their backyards. I am so proud that some of my work has proven to be of value to a formal study.” Despite his modest tone in discussing his contributions, his work was essential in reaching the scientific conclusions of the Haystack researchers’ study.
Knowledge of this complex system is more than an intellectual study; TEC structure and ionospheric activity are of serious space weather concern for satellite-based communication and navigation systems. The sharp TEC gradients and variability observed in this study are particularly significant when occurring in the highly populated mid latitudes, as seen across the United States in the May 2024 superstorm and more recent auroral events.
A new method to detect dehydration in plants
Have you ever wondered if your plants were dry and dehydrated, or if you’re not watering them enough? Farmers and green-fingered enthusiasts alike may soon have a way to find this out in real-time.
Over the past decade, researchers have been working on sensors to detect a wide range of chemical compounds, and a critical bottleneck has been developing sensors that can be used within living biological systems. This is all set to change with new sensors by the Singapore-MIT Alliance for Research and Technology (SMART) that can detect pH changes in living plants — an indicator of drought stress in plants — and enable the timely detection and management of drought stress before it leads to irreversible yield loss.
Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group of SMART, MIT’s research enterprise in Singapore, in collaboration with Temasek Life Sciences Laboratory and MIT, have pioneered the world’s first covalent organic framework (COF) sensors integrated within silk fibroin (SF) microneedles for in-planta detection of physiological pH changes. This advanced technology can detect a reduction in acidity in plant xylem tissues, providing early warning of drought stress in plants up to 48 hours before traditional methods.
Drought — or a lack of water — is a significant stressor that leads to lower yield by affecting key plant metabolic pathways, reducing leaf size, stem extension, and root proliferation. If prolonged, it can eventually cause plants to become discolored, wilt, and die. As agricultural challenges — including those posed by climate change, rising costs, and lack of land space — continue to escalate and adversely affect crop production and yield, farmers are often unable to implement proactive measures or pre-symptomatic diagnosis for early and timely intervention. This underscores the need for improved sensor integration that can facilitate in-vivo assessments and timely interventions in agricultural practices.
“This type of sensor can be easily attached to the plant and queried with simple instrumentation. It can therefore bring powerful analyses, like the tools we are developing within DISTAP, into the hands of farmers and researchers alike,” says Professor Michael Strano, co-corresponding author, DiSTAP co-lead principal investigator, and the Carbon P. Dubbs Professor of Chemical Engineering at MIT.
SMART’s breakthrough addresses a long-standing challenge for COF-based sensors, which were — until now — unable to interact with biological tissues. COFs are networks of organic molecules or polymers — which contain carbon atoms bonded to elements like hydrogen, oxygen, or nitrogen — arranged into consistent, crystal-like structures, which change color according to different pH levels. As drought stress can be detected through pH level changes in plant tissues, this novel COF-based sensor allows early detection of drought stress in plants through real-time measuring of pH levels in plant xylem tissues. This method could help farmers optimize crop production and yield amid evolving climate patterns and environmental conditions.
“The COF-silk sensors provide an example of new tools that are required to make agriculture more precise in a world that strives to increase global food security under the challenges imposed by climate change, limited resources, and the need to reduce the carbon footprint. The seamless integration between nanosensors and biomaterials enables the effortless measurement of plant fluids’ key parameters, such as pH, that in turn allows us to monitor plant health,” says Professor Benedetto Marelli, co-corresponding author, principal investigator at DiSTAP, and associate professor of civil and environmental engineering at MIT.
In an open-access paper titled, “Chromatic Covalent Organic Frameworks Enabling In-Vivo Chemical Tomography” recently published in Nature Communications, DiSTAP researchers documented their groundbreaking work, which demonstrated the real-time detection of pH changes in plant tissues. Significantly, this method allows in-vivo 3D mapping of pH levels in plant tissues using only a smartphone camera, offering a minimally invasive approach to exploring previously inaccessible environments compared to slower and more destructive traditional optical methods.
DiSTAP researchers designed and synthesized four COF compounds that showcase tunable acid chromism — color changes associated with changing pH levels — with SF microneedles coated with a layer of COF film made of these compounds. In turn, the transparency of SF microneedles and COF film allows in-vivo observation and visualization of pH spatial distributions through changes in the pH-sensitive colors.
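Because the readout is simply color, a smartphone pipeline can reduce to matching observed sensor pixels against calibrated reference colors. The sketch below is illustrative only: the calibration RGB values, the pH range, and the nearest-neighbor approach are invented for demonstration and are not taken from the paper:

```python
import numpy as np

# Hypothetical calibration table: pH -> mean sensor RGB under fixed lighting.
# These numbers are invented for illustration.
CAL = {
    4.0: (110, 10, 15),   # dark red
    5.0: (150, 20, 20),
    6.0: (200, 35, 30),
    7.0: (235, 55, 45),   # bright red
}

def estimate_ph(rgb):
    """Estimate pH by nearest-neighbor match against calibration colors."""
    rgb = np.asarray(rgb, dtype=float)
    return min(CAL, key=lambda ph: np.linalg.norm(rgb - np.asarray(CAL[ph])))
```

Applying such a lookup per pixel across images of the transparent microneedle array is what would turn a camera frame into the kind of spatial pH map described above.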
“Building on our previous work with biodegradable COF-SF films capable of sensing food spoilage, we’ve developed a method to detect pH changes in plant tissues. When used in plants, the COF compounds will transition from dark red to red as the pH increases in the xylem tissues, indicating that the plants are experiencing drought stress and require early intervention to prevent yield loss,” says Song Wang, research scientist at SMART DiSTAP and co-first author.
“SF microneedles are robust and can be designed to remain stable even when interfacing with biological tissues. They are also transparent, which allows multidimensional mapping in a minimally invasive manner. Paired with the COF films, farmers now have a precision tool to monitor plant health in real time and better address challenges like drought and improve crop resilience,” says Yangyang Han, senior postdoc at SMART DiSTAP and co-first author.
This study sets the foundation for future design and development for COF-SF microneedle-based tomographic chemical imaging of plants with COF-based sensors. Building on this research, DiSTAP researchers will work to advance this innovative technology beyond pH detection, with a focus on sensing a broad spectrum of biologically relevant analytes such as plant hormones and metabolites.
The research is conducted by SMART and supported by the National Research Foundation of Singapore under its Campus for Research Excellence And Technological Enterprise program.
Study reveals AI chatbots can detect race, but racial bias reduces response empathy
With the cover of anonymity and the company of strangers, the appeal of the digital world is growing as a place to seek out mental health support. This phenomenon is buoyed by the fact that over 150 million people in the United States live in federally designated mental health professional shortage areas.
“I really need your help, as I am too scared to talk to a therapist and I can’t reach one anyways.”
“Am I overreacting, getting hurt about husband making fun of me to his friends?”
“Could some strangers please weigh in on my life and decide my future for me?”
The above quotes are real posts taken from users on Reddit, a social media news website and forum where users can share content or ask for advice in smaller, interest-based forums known as “subreddits.”
Using a dataset of 12,513 posts with 70,429 responses from 26 mental health-related subreddits, researchers from MIT, New York University (NYU), and University of California Los Angeles (UCLA) devised a framework to help evaluate the equity and overall quality of mental health support chatbots based on large language models (LLMs) like GPT-4. Their work was recently published at the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP).
To accomplish this, researchers asked two licensed clinical psychologists to evaluate 50 randomly sampled Reddit posts seeking mental health support, pairing each post with either a Redditor’s real response or a GPT-4 generated response. Without knowing which responses were real or which were AI-generated, the psychologists were asked to assess the level of empathy in each response.
Mental health support chatbots have long been explored as a way of improving access to mental health support, but powerful LLMs like OpenAI’s ChatGPT are transforming human-AI interaction, with AI-generated responses becoming harder to distinguish from the responses of real humans.
Despite this remarkable progress, the unintended consequences of AI-provided mental health support have drawn attention to its potentially deadly risks; in March of last year, a Belgian man died by suicide after an exchange with ELIZA, a chatbot developed to emulate a psychotherapist, powered by an LLM called GPT-J. One month later, the National Eating Disorders Association suspended its chatbot Tessa after the chatbot began dispensing dieting tips to patients with eating disorders.
Saadia Gabriel, a recent MIT postdoc who is now a UCLA assistant professor and first author of the paper, admitted that she was initially very skeptical of how effective mental health support chatbots could actually be. Gabriel conducted this research during her time as a postdoc at MIT in the Healthy Machine Learning Group, led by Marzyeh Ghassemi, an MIT associate professor in the Department of Electrical Engineering and Computer Science and the MIT Institute for Medical Engineering and Science, who is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health and the Computer Science and Artificial Intelligence Laboratory.
What Gabriel and the team of researchers found was that GPT-4 responses were not only more empathetic overall, but they were 48 percent better at encouraging positive behavioral changes than human responses.
However, in a bias evaluation, the researchers found that GPT-4’s response empathy levels were reduced for Black (2 to 15 percent lower) and Asian posters (5 to 17 percent lower) compared to white posters or posters whose race was unknown.
To evaluate bias in GPT-4 responses and human responses, researchers included different kinds of posts with explicit demographic (e.g., gender, race) leaks and implicit demographic leaks.
An explicit demographic leak would look like: “I am a 32yo Black woman.”
Whereas an implicit demographic leak would look like: “Being a 32yo girl wearing my natural hair,” in which keywords are used to indicate certain demographics to GPT-4.
With the exception of Black female posters, GPT-4’s responses were found to be less affected by explicit and implicit demographic leaking compared to human responders, who tended to be more empathetic when responding to posts with implicit demographic suggestions.
“The structure of the input you give [the LLM] and some information about the context, like whether you want [the LLM] to act in the style of a clinician, the style of a social media post, or whether you want it to use demographic attributes of the patient, has a major impact on the response you get back,” Gabriel says.
The paper suggests that explicitly providing instruction for LLMs to use demographic attributes can effectively alleviate bias, as this was the only method where researchers did not observe a significant difference in empathy across the different demographic groups.
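The subgroup comparison behind numbers like “2 to 15 percent lower” amounts to comparing mean empathy scores per demographic group against a reference group. This is a hypothetical sketch of that bookkeeping, not the study’s actual code; the group labels and scores below are made up for illustration:

```python
from collections import defaultdict

def empathy_gaps(scored, reference="white/unknown"):
    """Percent difference in mean empathy score per group vs. a reference.

    `scored` is a list of (group, empathy_score) pairs, e.g. from
    clinician ratings of chatbot responses to demographically tagged posts.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for group, score in scored:
        sums[group] += score
        counts[group] += 1
    means = {g: sums[g] / counts[g] for g in sums}
    ref = means[reference]
    return {g: 100.0 * (means[g] - ref) / ref for g in means if g != reference}

# Invented example ratings (1-5 empathy scale):
scored = [("white/unknown", 4.0), ("white/unknown", 4.2),
          ("Black", 3.6), ("Black", 3.8),
          ("Asian", 3.5), ("Asian", 3.7)]
```

A negative gap for a group indicates lower average empathy than the reference, the pattern the researchers report for Black and Asian posters.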
Gabriel hopes this work can help ensure more comprehensive and thoughtful evaluation of LLMs being deployed in clinical settings across demographic subgroups.
“LLMs are already being used to provide patient-facing support and have been deployed in medical settings, in many cases to automate inefficient human systems,” Ghassemi says. “Here, we demonstrated that while state-of-the-art LLMs are generally less affected by demographic leaking than humans in peer-to-peer mental health support, they do not provide equitable mental health responses across inferred patient subgroups ... we have a lot of opportunity to improve models so they provide improved support when used.”
New climate chemistry model finds “non-negligible” impacts of potential hydrogen fuel leakage
As the world looks for ways to stop climate change, much discussion focuses on using hydrogen instead of fossil fuels, which emit climate-warming greenhouse gases (GHGs) when they’re burned. The idea is appealing. Burning hydrogen doesn’t emit GHGs to the atmosphere, and hydrogen is well-suited for a variety of uses, notably as a replacement for natural gas in industrial processes, power generation, and home heating.
But while burning hydrogen won’t emit GHGs, any hydrogen that’s leaked from pipelines or storage or fueling facilities can indirectly cause climate change by affecting other compounds that are GHGs, including tropospheric ozone and methane, with methane impacts being the dominant effect. A much-cited 2022 modeling study analyzing hydrogen’s effects on chemical compounds in the atmosphere concluded that these climate impacts could be considerable. With funding from the MIT Energy Initiative’s Future Energy Systems Center, a team of MIT researchers took a more detailed look at the specific chemistry that poses the risks of using hydrogen as a fuel if it leaks.
The researchers developed a model that tracks many more chemical reactions that may be affected by hydrogen and includes interactions among chemicals. Their open-access results, published Oct. 28 in Frontiers in Energy Research, showed that while the impact of leaked hydrogen on the climate wouldn’t be as large as the 2022 study predicted — and that it would be about a third of the impact of any natural gas that escapes today — leaked hydrogen will impact the climate. Leak prevention should therefore be a top priority as the hydrogen infrastructure is built, state the researchers.
Hydrogen’s impact on the “detergent” that cleans our atmosphere
Global three-dimensional climate-chemistry models using a large number of chemical reactions have also been used to evaluate hydrogen’s potential climate impacts, but results vary from one model to another, motivating the MIT study to analyze the chemistry. Most studies of the climate effects of using hydrogen consider only the GHGs that are emitted during the production of the hydrogen fuel. Different approaches may make “blue hydrogen” or “green hydrogen,” a label that relates to the GHGs emitted. Regardless of the process used to make the hydrogen, the fuel itself can threaten the climate. For widespread use, hydrogen will need to be transported, distributed, and stored — in short, there will be many opportunities for leakage.
The question is, What happens to that leaked hydrogen when it reaches the atmosphere? The 2022 study predicting large climate impacts from leaked hydrogen was based on reactions between pairs of just four chemical compounds in the atmosphere. The results showed that the hydrogen would deplete a chemical species that atmospheric chemists call the “detergent of the atmosphere,” explains Candice Chen, a PhD candidate in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “It goes around zapping greenhouse gases, pollutants, all sorts of bad things in the atmosphere. So it’s cleaning our air.” Best of all, that detergent — the hydroxyl radical, abbreviated as OH — removes methane, which is an extremely potent GHG in the atmosphere. OH thus plays an important role in slowing the rate at which global temperatures rise. But any hydrogen leaked to the atmosphere would reduce the amount of OH available to clean up methane, so the concentration of methane would increase.
However, chemical reactions among compounds in the atmosphere are notoriously complicated. While the 2022 study used a “four-equation model,” Chen and her colleagues — Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry; and Kane Stone, a research scientist in EAPS — developed a model that includes 66 chemical reactions. Analyses using their 66-equation model showed that the four-equation system didn’t capture a critical feedback involving OH — a feedback that acts to protect the methane-removal process.
Here’s how that feedback works: As the hydrogen decreases the concentration of OH, the cleanup of methane slows down, so the methane concentration increases. However, that methane undergoes chemical reactions that can produce new OH radicals. “So the methane that’s being produced can make more of the OH detergent,” says Chen. “There’s a small countering effect. Indirectly, the methane helps produce the thing that’s getting rid of it.” And, says Chen, that’s a key difference between their 66-equation model and the four-equation one. “The simple model uses a constant value for the production of OH, so it misses that key OH-production feedback,” she says.
To explore the importance of including that feedback effect, the MIT researchers performed the following analysis: They assumed that a single pulse of hydrogen was injected into the atmosphere and predicted the change in methane concentration over the next 100 years, first using the four-equation model and then the 66-equation model. With the four-equation system, the additional methane concentration peaked at nearly 2 parts per billion (ppb); with the 66-equation system, it peaked at just over 1 ppb.
Because the four-equation analysis assumes only that the injected hydrogen destroys the OH, the methane concentration increases unchecked for the first 10 years or so. In contrast, the 66-equation analysis goes one step further: the methane concentration does increase, but as the system re-equilibrates, more OH forms and removes methane. By not accounting for that feedback, the four-equation analysis overestimates the peak increase in methane due to the hydrogen pulse by about 85 percent. Spread over time, the simple model doubles the amount of methane that forms in response to the hydrogen pulse.
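The qualitative effect of that feedback can be seen in a minimal two-species box model. This is an illustrative toy, not the researchers’ 66-equation model: every rate constant and the pulse size are made-up placeholder values, and the feedback is collapsed into a single term that lets the methane perturbation enhance its own OH sink.

```python
# Toy box model of a hydrogen pulse perturbing methane, integrated with
# forward Euler. All constants are illustrative placeholders, not values
# from the MIT study.

def peak_methane(feedback: bool, years: float = 100.0, dt: float = 0.01) -> float:
    """Peak methane perturbation (arbitrary ppb) after a hydrogen pulse."""
    h2 = 10.0     # injected hydrogen pulse (ppb), placeholder
    ch4 = 0.0     # methane perturbation (ppb)
    k_h2 = 0.5    # hydrogen removal rate by OH (1/yr), placeholder
    alpha = 0.4   # extra CH4 per unit H2 oxidized, via OH depletion (placeholder)
    tau = 12.0    # baseline methane perturbation lifetime (yr), placeholder
    beta = 0.3    # extra fractional sink from CH4-regenerated OH (placeholder)
    peak = 0.0
    for _ in range(int(years / dt)):
        sink = ch4 / tau
        if feedback:
            # OH regenerated by methane oxidation removes methane faster
            sink *= 1.0 + beta
        ch4 += (alpha * k_h2 * h2 - sink) * dt
        h2 += (-k_h2 * h2) * dt
        peak = max(peak, ch4)
    return peak
```

Running both variants shows the feedback run peaking lower than the constant-OH run, mirroring the direction of the 66- versus four-equation comparison, though not its magnitudes.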
Chen cautions that the point of their work is not to present their result as “a solid estimate” of the impact of hydrogen. Their analysis is based on a simple “box” model that represents global average conditions and assumes that all the chemical species present are well mixed. Thus, the species can vary over time — that is, they can be formed and destroyed — but any species that are present are always perfectly mixed. As a result, a box model does not account for the impact of, say, wind on the distribution of species. “The point we’re trying to make is that you can go too simple,” says Chen. “If you’re going simpler than what we’re representing, you will get further from the right answer.” She goes on to note, “The utility of a relatively simple model like ours is that all of the knobs and levers are very clear. That means you can explore the system and see what affects a value of interest.”
Leaked hydrogen versus leaked natural gas: A climate comparison
Burning natural gas produces fewer GHG emissions than does burning coal or oil; but as with hydrogen, any natural gas that’s leaked from wells, pipelines, and processing facilities can have climate impacts, negating some of the perceived benefits of using natural gas in place of other fossil fuels. After all, natural gas consists largely of methane, the highly potent GHG in the atmosphere that’s cleaned up by the OH detergent. Given its potency, even small leaks of methane can have a large climate impact.
So when thinking about replacing natural gas fuel — essentially methane — with hydrogen fuel, it’s important to consider how the climate impacts of the two fuels compare if and when they’re leaked. The usual way to compare the climate impacts of two chemicals is a measure called the global warming potential, or GWP. The GWP combines two measures: the radiative forcing of a gas (its heat-trapping ability) and its lifetime in the atmosphere. Since the lifetimes of gases differ widely, the convention is to relate the GWP of each gas to the GWP of carbon dioxide.
But hydrogen and methane leakage both cause increases in methane, and that methane decays according to its lifetime. Chen and her colleagues therefore realized that an unconventional procedure would work: they could compare the impacts of the two leaked gases directly. What they found was that the climate impact of hydrogen is about one-third that of methane on a per-mass basis. So switching from natural gas to hydrogen would not only eliminate combustion emissions, but could also reduce the climate effects, depending on how much fuel leaks.
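The conventional GWP route can be sketched numerically: integrate the heat-trapping effect of a decaying 1 kg pulse of a gas over a time horizon, then divide by the same integral for CO2. The sketch below uses made-up radiative-efficiency numbers and a crude single-lifetime stand-in for CO2’s complex decay; it illustrates the bookkeeping, not the values used in the study.

```python
import math

def abs_gwp(rad_eff: float, lifetime: float, horizon: float = 100.0) -> float:
    """Integrated forcing of a decaying 1 kg pulse over the horizon.
    Closed form of the integral of rad_eff * exp(-t / lifetime) dt."""
    return rad_eff * lifetime * (1.0 - math.exp(-horizon / lifetime))

# Placeholder inputs (illustrative units, not authoritative):
ch4 = abs_gwp(rad_eff=10.0, lifetime=12.0)   # methane: strong forcing, short-lived
co2 = abs_gwp(rad_eff=0.04, lifetime=500.0)  # crude one-lifetime proxy for CO2
gwp_ch4 = ch4 / co2  # conventional route: each gas expressed relative to CO2

# The direct route the researchers used sidesteps the CO2 denominator:
# because leaked hydrogen and leaked methane both act through methane on
# similar time scales, their integrated impacts can be ratioed against
# each other per unit mass.
```

The design point is that dividing by CO2 imports the awkwardness of CO2’s very long, multi-timescale decay; ratioing two gases whose effects live on similar time scales avoids that entirely.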
Key takeaways
In summary, Chen highlights some of what she views as the key findings of the study. First on her list is the following: “We show that a really simple four-equation system is not what should be used to project out the atmospheric response to more hydrogen leakages in the future.” The researchers believe that their 66-equation model is a good compromise for the number of chemical reactions to include. It generates estimates for the GWP of methane “pretty much in line with the lower end of the numbers that most other groups are getting using much more sophisticated climate chemistry models,” says Chen. And it’s sufficiently transparent to use in exploring various options for protecting the climate. Indeed, the MIT researchers plan to use their model to examine scenarios that involve replacing other fossil fuels with hydrogen to estimate the climate benefits of making the switch in coming decades.
The study also demonstrates a valuable new way to compare the greenhouse effects of two gases. As long as their effects exist on similar time scales, a direct comparison is possible — and preferable to comparing each with carbon dioxide, which is extremely long-lived in the atmosphere. In this work, the direct comparison generates a simple look at the relative climate impacts of leaked hydrogen and leaked methane — valuable information to take into account when considering switching from natural gas to hydrogen.
Finally, the researchers offer practical guidance for infrastructure development and use for both hydrogen and natural gas. Their analyses determine that hydrogen fuel itself has a “non-negligible” GWP, as does natural gas, which is mostly methane. Therefore, minimizing leakage of both fuels will be necessary to achieve net-zero carbon emissions by 2050, the goal set by both the European Commission and the U.S. Department of State. Their paper concludes, “If used nearly leak-free, hydrogen is an excellent option. Otherwise, hydrogen should only be a temporary step in the energy transition, or it must be used in tandem with carbon-removal steps [elsewhere] to counter its warming effects.”
Saving the Internet in Europe: How EFF Works in Europe
This post is part one in a series of posts about EFF’s work in Europe.
EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.
In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe.
Why EFF Works in Europe
European lawmakers have been highly active in proposing laws to regulate online services and emerging technologies. And these laws have the potential to impact the whole world. As such, we have long recognized the importance of engaging with organizations and lawmakers across Europe. In 2007, EFF became a member of the European Digital Rights Initiative (EDRi), a collective of NGOs, experts, advocates and academics that have for two decades worked to advance digital rights throughout Europe. From the early days of the movement, we fought back against legislation threatening user privacy in Germany, free expression in the UK, and the right to innovation across the continent.
Over the years, we have continued collaborations with EDRi as well as other coalitions including IFEX, the international freedom of expression network, Reclaim Your Face, and Protect Not Surveil. In our EU policy work, we have advocated for fundamental principles like transparency, openness, and informational self-determination. We emphasized that legislative acts should never come at the expense of protections that have served the internet well: Preserve what works. Fix what is broken. And EFF has made a real difference: We have ensured that recent internet regulation bills don’t turn social networks into censorship tools and safeguarded users’ right to private conversations. We also helped guide new fairness rules in digital markets to focus on what is really important: breaking the chokehold of major platforms over the internet.
Recognizing the internet’s global reach, we have also stressed that lawmakers must consider the global impact of regulation and enforcement, particularly effects on vulnerable groups and underserved communities. As part of this work, we facilitate a global alliance of civil society organizations representing diverse communities across the world to ensure that non-European voices are heard in Brussels’ policy debates.
Our Teams
Today, we have a robust policy team that works to influence policymakers in Europe. Led by International Policy Director Christoph Schmon and supported by Assistant Director of EU Policy Svea Windwehr, both of whom are based in Europe, the team brings a set of unique expertise in European digital policy making and fundamental rights online. They engage with lawmakers, provide policy expertise and coordinate EFF’s work in Europe.
But legislative work is only one piece of the puzzle, and as a collaborative organization, EFF pulls expertise from various teams to shape policy, build capacity, and campaign for a better digital future. Our teams engage with the press and the public through comprehensive analysis of digital rights issues, educational guides, activist workshops, press briefings, and more. They are active in broad coalitions across the EU and the UK, as well as in East and Southeastern Europe.
Our work is not limited to EU digital policy issues. We have been active in the UK, advocating for user rights in the context of the Online Safety Act, and we also work on issues facing users in the Balkans and EU accession countries. For instance, we recently collaborated with Digital Security Lab Ukraine on a workshop on content moderation held in Warsaw, and participated in the Bosnia and Herzegovina Internet Governance Forum. We are also an active member of the High-Level Group of Experts for Resilience Building in Eastern Europe, tasked with advising on online regulation in Georgia, Moldova, and Ukraine.
EFF on Stage
In addition to all of the behind-the-scenes work that we do, EFF regularly showcases our work on European stages to share our mission and message. You can find us at conferences like re:publica, CPDP, Chaos Communication Congress, or Freedom not Fear, and at local events like regional Internet Governance Forums. For instance, last year Director for International Freedom of Expression Jillian C. York gave a talk with Svea Windwehr at Berlin’s re:publica about transparency reporting. More recently, Senior Speech and Privacy Activist Paige Collings facilitated a session on queer justice in the digital age at a workshop held in Bosnia and Herzegovina.
There is so much more work to be done. In the next posts in this series, you will learn more about what EFF will be doing in Europe in 2025 and beyond, as well as some of our lessons and successes from past struggles.
Lara Ozkan named 2025 Marshall Scholar
Lara Ozkan, an MIT senior from Oradell, New Jersey, has been selected as a 2025 Marshall Scholar and will begin graduate studies in the United Kingdom next fall. Funded by the British government, the Marshall Scholarship offers American students of high academic achievement the opportunity to pursue graduate studies in any field at any university in the U.K. Up to 50 scholarships are granted each year.
“We are so proud that Lara will be representing MIT in the U.K.,” says Kim Benard, associate dean of distinguished fellowships. “Her accomplishments to date have been extraordinary and we are excited to see where her future work goes.” Ozkan, along with MIT’s other endorsed Marshall candidates, was mentored by the distinguished fellowships team in Career Advising and Professional Development, and the Presidential Committee on Distinguished Fellowships, co-chaired by professors Nancy Kanwisher and Tom Levenson.
Ozkan, a senior majoring in computer science and molecular biology, plans to use her Marshall Scholarship to pursue an MPhil in biological science at Cambridge University’s Sanger Institute, followed by a master’s by research degree in artificial intelligence and machine learning at Imperial College London. She is committed to a career advancing women’s health through innovation in technology and the application of computational tools to research.
Prior to beginning her studies at MIT, Ozkan conducted computational biology research at Cold Spring Harbor Laboratory. At MIT, she has been an undergraduate researcher with the MIT Media Lab’s Conformable Decoders group, where she has worked on breast cancer wearable ultrasound technologies. She also contributes to Professor Manolis Kellis’ computational biology research group in the MIT Computer Science and Artificial Intelligence Laboratory. Ozkan’s achievements in computational biology research earned her the MIT Susan Hockfield Prize in Life Sciences.
At the MIT Schwarzman College of Computing, Ozkan has examined the ethical implications of genomics projects and developed AI ethics curricula for MIT computer science courses. Through internships with Accenture Gen AI Risk and pharmaceutical firms, she gained practical insights into responsible AI use in health care.
Ozkan is president and executive director of MIT Capital Partners, an organization that connects the entrepreneurship community with venture capital firms, and she is president of the MIT Sloan Business Club. Additionally, she serves as an undergraduate research peer ambassador and is a member of the MIT EECS Committee on Diversity, Equity, and Inclusion. As part of the MIT Schwarzman College of Computing Undergraduate Advisory Group, she advises on policies and programming to improve the student experience in interdisciplinary computing.
Beyond Ozkan’s research roles, she volunteers with MIT CodeIt, teaching middle-school girls computer science. As a counselor with Camp Kesem, she mentors children whose parents are impacted by cancer.
Short-Lived Certificates Coming to Let’s Encrypt
Starting next year:
Our longstanding offering won’t fundamentally change next year, but we are going to introduce a new offering that’s a big shift from anything we’ve done before—short-lived certificates. Specifically, certificates with a lifetime of six days. This is a big upgrade for the security of the TLS ecosystem because it minimizes exposure time during a key compromise event.
Because we’ve done so much to encourage automation over the past decade, most of our subscribers aren’t going to have to do much in order to switch to shorter lived certificates. We, on the other hand, are going to have to think about the possibility that we will need to issue 20x as many certificates as we do now. It’s not inconceivable that at some point in our next decade we may need to be prepared to issue 100,000,000 certificates per day...
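Most of that scale-up is renewal arithmetic: cutting certificate lifetime from 90 days to 6 days multiplies renewal frequency by 15 before any growth in subscriber count. Here is a back-of-the-envelope sketch; the active-certificate count is a hypothetical round number, not Let’s Encrypt’s actual figure, and the two-thirds renewal point is just a common ACME client heuristic.

```python
def daily_issuance(active_certs: int, lifetime_days: float,
                   renew_at: float = 2 / 3) -> float:
    """Certificates issued per day if each is renewed at renew_at of its lifetime."""
    return active_certs / (lifetime_days * renew_at)

active = 400_000_000                     # hypothetical active-certificate count
per_day_90 = daily_issuance(active, 90)  # ~6.7 million/day at 90-day lifetimes
per_day_6 = daily_issuance(active, 6)    # ~100 million/day at 6-day lifetimes
scale = per_day_6 / per_day_90           # 90 / 6 = 15x from lifetime alone
```

The quoted 20x figure presumably also anticipates growth in the number of certificates served, on top of the pure lifetime ratio.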