Feed aggregator

DOT aims to keep wind turbines away from railroads, highways

ClimateWire News - Tue, 08/19/2025 - 6:18am
The Transportation secretary says the effort is about safety, but some rail experts and the wind industry say there's little evidence for that.

White House plans to shut down board probing deadly steel mill blast

ClimateWire News - Tue, 08/19/2025 - 6:16am
The U.S. Chemical Safety Board is investigating an Aug. 11 explosion that killed two people and injured 10 at a U.S. Steel plant.

Vermont defends its climate ‘Superfund’ law from Trump attacks

ClimateWire News - Tue, 08/19/2025 - 6:14am
The state told the court that the administration is “wrong on the law.”

Guide aims to build confidence in the voluntary carbon market

ClimateWire News - Tue, 08/19/2025 - 6:13am
An international nonprofit has helped carbon markets in Peru, Kenya, Pakistan and Mexico. Its guide is aimed at "ensuring integrity."

Coastal towns restore marshes, dunes, reefs to offset rising seas

ClimateWire News - Tue, 08/19/2025 - 6:12am
Communities across the nation are also building flood walls, berms and levees to protect areas that lack adequate natural protection.

Over 150 people missing after devastating floods in Pakistan

ClimateWire News - Tue, 08/19/2025 - 6:12am
A changing climate has made residents of northern Pakistan's river-carved mountainous areas more vulnerable to sudden, heavy rains.

Spain’s prime minister urges national climate pact as fires rage

ClimateWire News - Tue, 08/19/2025 - 6:11am
Pedro Sanchez said he’ll present a plan to enhance nationwide coordination to improve the country’s readiness for climate disasters.

Cloudbursts are ravaging India and Pakistan. What are they?

ClimateWire News - Tue, 08/19/2025 - 6:10am
The sudden, violent storms dump large volumes of rain in a short time — usually about 4 inches within an hour over a localized area.

A new model predicts how molecules will dissolve in different solvents

MIT Latest News - Tue, 08/19/2025 - 5:00am

Using machine learning, MIT chemical engineers have created a computational model that can predict how well any given molecule will dissolve in an organic solvent — a key step in the synthesis of nearly any pharmaceutical. This type of prediction could make it much easier to develop new ways to produce drugs and other useful molecules.

The new model, which predicts how much of a solute will dissolve in a particular solvent, should help chemists to choose the right solvent for any given reaction in their synthesis, the researchers say. Common organic solvents include ethanol and acetone, and there are hundreds of others that can also be used in chemical reactions.

“Predicting solubility really is a rate-limiting step in synthetic planning and manufacturing of chemicals, especially drugs, so there’s been a longstanding interest in being able to make better predictions of solubility,” says Lucas Attia, an MIT graduate student and one of the lead authors of the new study.

The researchers have made their model freely available, and many companies and labs have already started using it. The model could be particularly useful for identifying solvents that are less hazardous than some of the most commonly used industrial solvents, the researchers say.

“There are some solvents which are known to dissolve most things. They’re really useful, but they’re damaging to the environment, and they’re damaging to people, so many companies require that you have to minimize the amount of those solvents that you use,” says Jackson Burns, an MIT graduate student who is also a lead author of the paper. “Our model is extremely useful in being able to identify the next-best solvent, which is hopefully much less damaging to the environment.”

William Green, the Hoyt Hottel Professor of Chemical Engineering and director of the MIT Energy Initiative, is the senior author of the study, which appears today in Nature Communications. Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering, is also an author of the paper.

Solving solubility

The new model grew out of a project that Attia and Burns worked on together in an MIT course on applying machine learning to chemical engineering problems. Traditionally, chemists have predicted solubility with a tool known as the Abraham Solvation Model, which can be used to estimate a molecule’s overall solubility by adding up the contributions of chemical structures within the molecule. While these predictions are useful, their accuracy is limited.
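The Abraham approach described above is a linear free-energy relationship: solubility is estimated as a weighted sum of solute descriptors, with one set of coefficients per solvent. A minimal sketch of that additive structure, with made-up descriptor values and solvent coefficients (real Abraham parameters are fitted to experimental data):

```python
# Toy sketch of an Abraham-style solubility estimate. All numbers below are
# invented for illustration; they are not real fitted Abraham parameters.

def abraham_log_solubility(solute, solvent):
    """Linear free-energy relationship: log S = c + sum(coef * descriptor)."""
    descriptors = ("E", "S", "A", "B", "V")
    return solvent["c"] + sum(solvent[d] * solute[d] for d in descriptors)

# Hypothetical solute descriptors (excess molar refraction, polarity,
# H-bond acidity/basicity, McGowan volume) and solvent coefficients.
aspirin_like = {"E": 0.78, "S": 1.69, "A": 0.71, "B": 0.76, "V": 1.29}
ethanol_like = {"c": 0.22, "E": -0.20, "S": 0.79, "A": 3.64, "B": 1.31, "V": -3.18}

print(round(abraham_log_solubility(aspirin_like, ethanol_like), 3))  # → 0.877
```

The additive form is what limits accuracy: each structural fragment contributes a fixed amount, so interactions between fragments are invisible to the model.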

In the past few years, researchers have begun using machine learning to try to make more accurate solubility predictions. Before Burns and Attia began working on their new model, the state-of-the-art model for predicting solubility was a model developed in Green’s lab in 2022.

That model, known as SolProp, works by predicting a set of related properties and combining them, using thermodynamics, to ultimately predict the solubility. However, the model has difficulty predicting solubility for solutes that it hasn’t seen before.

“For drug and chemical discovery pipelines where you’re developing a new molecule, you want to be able to predict ahead of time what its solubility looks like,” Attia says.

Part of the reason existing solubility models haven't worked well is that there wasn't a comprehensive dataset to train them on. However, in 2023 a new dataset called BigSolDB was released, which compiled data from nearly 800 published papers, including solubility measurements for about 800 molecules dissolved in more than 100 organic solvents commonly used in synthetic chemistry.

Attia and Burns decided to try training two different types of models on this data. Both of these models represent the chemical structures of molecules using numerical representations known as embeddings, which incorporate information such as the number of atoms in a molecule and which atoms are bound to which other atoms. Models can then use these representations to predict a variety of chemical properties.

One of the models used in this study, known as FastProp and developed by Burns and others in Green’s lab, incorporates “static embeddings.” This means that the model already knows the embedding for each molecule before it starts doing any kind of analysis.

The other model, ChemProp, learns an embedding for each molecule during the training, at the same time that it learns to associate the features of the embedding with a trait such as solubility. This model, developed across multiple MIT labs, has already been used for tasks such as antibiotic discovery, lipid nanoparticle design, and predicting chemical reaction rates.
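The distinction between the two embedding styles can be sketched in a few lines. This is an illustrative toy, not the actual FastProp or ChemProp code: the static embedding is a fixed descriptor function computed before any training, while the learned embedding holds parameters that a training loop would adjust alongside the property predictor:

```python
import random

# Toy contrast between static and learned embeddings. The descriptor choices
# and class names here are illustrative, not the real FastProp/ChemProp code.

def static_embedding(smiles: str) -> list:
    """Fixed descriptors, available before any training starts."""
    return [
        len(smiles),                            # crude size proxy
        smiles.count("O"),                      # oxygen count
        smiles.count("N"),                      # nitrogen count
        smiles.count("=") + smiles.count("#"),  # unsaturation proxy
    ]

class LearnedEmbedding:
    """Per-feature vectors that a training loop would adjust."""
    def __init__(self, dim=4, seed=0):
        self.dim = dim
        self.rng = random.Random(seed)
        self.weights = {}                       # one vector per character "feature"

    def __call__(self, smiles: str) -> list:
        vec = [0.0] * self.dim
        for ch in smiles:
            if ch not in self.weights:          # lazily initialize each feature
                self.weights[ch] = [self.rng.uniform(-1, 1) for _ in range(self.dim)]
            for i, w in enumerate(self.weights[ch]):
                vec[i] += w                     # training would update self.weights
        return vec

print(static_embedding("CCO"))                  # ethanol-like string → [3, 1, 0, 0]
```

The static embedding returns the same vector forever; the learned one starts random and only becomes meaningful once gradients tie its weights to a target property such as solubility.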

The researchers trained both types of models on over 40,000 data points from BigSolDB, including information on the effects of temperature, which plays a significant role in solubility. Then, they tested the models on about 1,000 solutes that had been withheld from the training data. They found that the models’ predictions were two to three times more accurate than those of SolProp, the previous best model, and the new models were especially accurate at predicting variations in solubility due to temperature.
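The evaluation protocol described above, withholding whole solutes rather than random measurements, can be sketched as a grouped split. The records below are invented; only the splitting logic is the point:

```python
import random

# Solute-wise holdout: test molecules are withheld entirely, so the model is
# scored on solutes it has never seen, not just unseen measurements.

def split_by_solute(records, test_fraction=0.2, seed=42):
    solutes = sorted({r["solute"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(solutes)
    n_test = max(1, int(len(solutes) * test_fraction))
    test_solutes = set(solutes[:n_test])
    train = [r for r in records if r["solute"] not in test_solutes]
    test = [r for r in records if r["solute"] in test_solutes]
    return train, test

# Hypothetical rows in the shape of a solubility dataset (temperature included,
# since it matters for solubility).
data = [
    {"solute": "aspirin",   "solvent": "ethanol", "T": 298, "logS": -1.2},
    {"solute": "aspirin",   "solvent": "acetone", "T": 310, "logS": -0.8},
    {"solute": "caffeine",  "solvent": "ethanol", "T": 298, "logS": -0.9},
    {"solute": "ibuprofen", "solvent": "acetone", "T": 298, "logS": -1.5},
]
train, test = split_by_solute(data)
assert not {r["solute"] for r in train} & {r["solute"] for r in test}
```

A random row-wise split would leak information: the model could see a solute in training and be tested on the same solute at a different temperature, inflating its apparent accuracy on new molecules.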

“Being able to accurately reproduce those small variations in solubility due to temperature, even when the overarching experimental noise is very large, was a really positive sign that the network had correctly learned an underlying solubility prediction function,” Burns says.

Accurate predictions

The researchers had expected that the model based on ChemProp, which is able to learn new representations as it goes along, would be able to make more accurate predictions. However, to their surprise, they found that the two models performed essentially the same. That suggests that the main limitation on their performance is the quality of the data, and that the models are performing as well as theoretically possible based on the data that they’re using, the researchers say.

“ChemProp should always outperform any static embedding when you have sufficient data,” Burns says. “We were blown away to see that the static and learned embeddings were statistically indistinguishable in performance across all the different subsets, which indicates to us that the data limitations that are present in this space dominated the model performance.”

The models could become more accurate, the researchers say, if better training and testing data were available — ideally, data obtained by one person or a group of people all trained to perform the experiments the same way.

“One of the big limitations of using these kinds of compiled datasets is that different labs use different methods and experimental conditions when they perform solubility tests. That contributes to this variability between different datasets,” Attia says.

Because the model based on FastProp makes its predictions faster and has code that is easier for other users to adapt, the researchers decided to make that one, known as FastSolv, available to the public. Multiple pharmaceutical companies have already begun using it.

“There are applications throughout the drug discovery pipeline,” Burns says. “We’re also excited to see, outside of formulation and drug discovery, where people may use this model.”

The research was funded, in part, by the U.S. Department of Energy.

Victory! Pen-Link's Police Tools Are Not Secret

EFF: Updates - Tue, 08/19/2025 - 12:40am

In a victory for transparency, the government contractor Pen-Link agreed to disclose the prices and descriptions of surveillance products that it sold to a local California Sheriff's office.

The settlement ends a months-long California public records lawsuit involving the Electronic Frontier Foundation and the San Joaquin County Sheriff’s Office. It provides further proof that the surveillance tools used by governments are not secret and shouldn’t be treated that way under the law.

Last year, EFF submitted a California public records request to the San Joaquin County Sheriff’s Office for information about its work with Pen-Link and its subsidiary Cobwebs Technologies. Pen-Link went to court to try to block the disclosure, claiming the names of its products and prices were trade secrets. EFF later entered the case to obtain the records it requested.

The Records Show the Sheriff Bought Online Monitoring Tools

The records disclosed in the settlement show that in late 2023, the Sheriff’s Office paid $180,000 for a two-year subscription to the Tangles “Web Intelligence Platform,” which is a Cobwebs Technologies product that allows the Sheriff to monitor online activity. The subscription allows the Sheriff to perform hundreds of searches and requests per month. The source of information includes the “Dark Web” and “Webloc,” according to the price quotation. According to the settlement, the Sheriff’s Office was offered but did not purchase a series of other add-ons including “AI Image processing” and “Webloc Geo source data per user/Seat.”
The intelligence platform overall has been described in other documents as analyzing data from the “open, deep, and dark web, to mobile and social.” And Webloc has been described as a platform that “provides access to vast amounts of location-based data in any specified geographic location.” Journalists at multiple news outlets have chronicled Pen-Link's technology and have published Cobwebs training manuals that demonstrate that its product can be used to target activists and independent journalists. Major local, state, and federal agencies use Pen-Link's technology.

The records also show that in late 2022 the Sheriff’s Office purchased some of Pen-Link’s more traditional products that help law enforcement execute and analyze data from wiretaps and pen-registers after a court grants approval. 

Government Surveillance Tools Are Not Trade Secrets

The public has a right to know what surveillance tools the government is using, no matter whether the government develops its own products or purchases them from private contractors. There are a host of policy, legal, and factual reasons that the surveillance tools sold by contractors like Pen-Link are not trade secrets.

Public information about these products and prices helps communities have informed conversations and make decisions about how their government should operate. In this case, Pen-Link argued that its products and prices are trade secrets partially because governments rely on the company to “keep their data analysis capabilities private.” The company argued that clients would “lose trust” and governments may avoid “purchasing certain services” if the purchases were made public. This troubling claim highlights the importance of transparency. The public should be skeptical of any government tool that relies on secrecy to operate.

Information about these tools is also essential for defendants and criminal defense attorneys, who have the right to discover when these tools are used during an investigation. In support of its trade secret claim, Pen-Link cited terms of service that purported to restrict the government from disclosing its use of this technology without the company’s consent. Terms like this cannot be used to circumvent the public’s right to know, and governments should not agree to them.

Finally, in order for surveillance tools and their prices to be protected as a trade secret under the law, they have to actually be secret. However, Pen-Link’s tools and their prices are already public across the internet—in previous public records disclosures, product descriptions, trademark applications, and government websites.

Lessons Learned

Government surveillance contractors should consider the policy implications, reputational risks, and waste of time and resources when attempting to hide from the public the full terms of their sales to law enforcement.

Cases like these, known as reverse-public records act lawsuits, are troubling because a well-resourced company can frustrate public access by merely filing the case. Not every member of the public, researcher, or journalist can afford to litigate their public records request. Without a team of internal staff attorneys, it would have cost EFF tens of thousands of dollars to fight this lawsuit.

Luckily, in this case, EFF had the ability to fight back, and we will continue our surveillance transparency work. That is why EFF required some attorneys’ fees to be part of the final settlement.

Related Cases: Pen-Link v. County of San Joaquin Sheriff’s Office

Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas

EFF: Updates - Mon, 08/18/2025 - 5:01pm

The Ninth Circuit upheld an important limitation on Digital Millennium Copyright Act (DMCA) subpoenas that other federal courts have recognized for more than two decades. The DMCA, a misguided anti-piracy law passed in the late nineties, created a bevy of powerful tools, ostensibly to help copyright holders fight online infringement. Unfortunately, the DMCA’s powerful protections are ripe for abuse by “copyright trolls,” unscrupulous litigants who exploit the system at everyone else’s expense.

The DMCA’s “notice and takedown” regime is one of these tools. Section 512 of the DMCA creates “safe harbors” that protect service providers from liability, so long as they disable access to content when a copyright holder notifies them that the content is infringing, and fulfill some other requirements. This gives copyright holders a quick and easy way to censor allegedly infringing content without going to court. 
Section 512(h) is ostensibly designed to facilitate this system, by giving rightsholders a fast and easy way of identifying anonymous infringers. Section 512(h) allows copyright holders to obtain a judicial subpoena to unmask the identities of allegedly infringing anonymous internet users, just by asking a court clerk to issue one, and attaching a copy of the infringement notice. In other words, they can wield the court’s power to override an internet user’s right to anonymous speech, without permission from a judge.  It’s easy to see why these subpoenas are prone to misuse.

Internet service providers (ISPs)—the companies that provide an internet connection (e.g. broadband or fiber) to customers—are obvious targets for these subpoenas. Often, copyright holders know the Internet Protocol (IP) address of an alleged infringer, but not their name or contact information. Since ISPs assign IP addresses to customers, they can often identify the customer associated with one.

Fortunately, Section 512(h) has an important limitation that protects users.  Over two decades ago, several federal appeals courts ruled that Section 512(h) subpoenas cannot be issued to ISPs. Now, in In re Internet Subscribers of Cox Communications, LLC, the Ninth Circuit agreed, as EFF urged it to in our amicus brief.

As the Ninth Circuit held:

Because a § 512(a) service provider cannot remove or disable access to infringing content, it cannot receive a valid (c)(3)(A) notification, which is a prerequisite for a § 512(h) subpoena. We therefore conclude from the text of the DMCA that a § 512(h) subpoena cannot issue to a § 512(a) service provider as a matter of law.

This decision preserves the understanding of Section 512(h) that internet users, websites, and copyright holders have shared for decades. As EFF explained to the court in its amicus brief:

[This] ensures important procedural safeguards for internet users against a group of copyright holders who seek to monetize frequent litigation (or threats of litigation) by coercing settlements—copyright trolls. Affirming the district court and upholding the interpretation of the D.C. and Eighth Circuits will preserve this protection, while still allowing rightsholders the ability to find and sue infringers.

EFF applauds this decision. And because three federal appeals courts have all ruled the same way on this question—and none have disagreed—ISPs all over the country can feel confident about protecting their customers’ privacy by simply throwing improper DMCA 512(h) subpoenas in the trash.

From Book Bans to Internet Bans: Wyoming Lets Parents Control the Whole State’s Access to The Internet

EFF: Updates - Mon, 08/18/2025 - 4:00pm

If you've read about the sudden appearance of age verification across the internet in the UK and thought it would never happen in the U.S., take note: many politicians want the same or even more strict laws. As of July 1st, South Dakota and Wyoming enacted laws requiring any website that hosts any sexual content to implement age verification measures. These laws would potentially capture a broad range of non-pornographic content, including classic literature and art, and expose a wide range of platforms, of all sizes, to civil or criminal liability for not using age verification on every user. That includes social media networks like X, Reddit, and Discord; online retailers like Amazon and Barnes & Noble; and streaming platforms like Netflix and Rumble—essentially, any site that allows user-generated or published content without gatekeeping access based on age.

These laws expand on the flawed logic of last month’s troubling Supreme Court decision, Free Speech Coalition v. Paxton, which gave Texas the green light to require age verification for sites where at least one-third of the content is sexual material deemed “harmful to minors.” Wyoming and South Dakota appear to interpret that decision as license to require age verification—and impose potential legal liability—on any website that contains ANY image, video, or post with sexual content that could be interpreted as harmful to minors. Platforms or websites may be able to comply by implementing an “age gate” within certain sections of their sites where, for example, user-generated content is allowed, or at the point of entry to the entire site.

Although these laws are in effect, we do not believe the Supreme Court’s decision in FSC v. Paxton gives these laws any constitutional legitimacy. You do not need a law degree to see the difference between the Texas law—which targets sites where a substantial portion (one third) of content is “sexual material harmful to minors”—and these laws, which apply to any site that contains even a single instance of such material. In practice, it is the difference between burdening adults with age gates for websites that host “adult” content, and burdening the entire internet, including sites that allow user-generated content or published content.
But lawmakers, prosecutors, and activists in conservative states have worked for years to aggressively expand the definition of “harmful to minors” and use other methods to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature. Books like The Bluest Eye by Toni Morrison, The Handmaid’s Tale by Margaret Atwood, and And Tango Makes Three have all been swept up in these crusades—not because of their overall content, but because of isolated scenes or references.

Wyoming’s law is also particularly extreme: rather than providing for enforcement by the Attorney General, HB0043 is a “bounty” law that deputizes any resident with a child to file civil lawsuits against websites they believe are in violation, effectively turning anyone into a potential content cop. There is no central agency, no regulatory oversight, and no clear standard. Instead, the law invites parents in Wyoming to take enforcement for the entire state—every resident, and everyone else's children—into their own hands by suing websites that contain a single example of objectionable content. Most other state age-verification laws allow individuals to make reports to state Attorneys General, who are responsible for enforcement, and some include a private right of action allowing parents or guardians to file civil claims for damages; the Wyoming law, by contrast, is similar to laws in Louisiana and Utah that rely entirely on civil enforcement.

This is a textbook example of a “heckler’s veto,” where a single person can unilaterally decide what content the public is allowed to access. However, it is clear that the Wyoming legislature explicitly designed the law this way in a deliberate effort to sidestep state enforcement and avoid an early constitutional court challenge, as many other bounty laws targeting people who assist in abortions, drag performers, and trans people have done. The result? An open invitation from the Wyoming legislature to weaponize its citizens, and the courts, against platforms, big or small. Because when nearly anyone can sue any website over any content they deem unsafe for minors, the result isn’t safety. It’s censorship.
Imagine a Wyomingite stumbling across an NSFW subreddit or a Tumblr fanfic blog and deciding it violates the law. If they were a parent of a minor, that resident could sue the platform, potentially forcing those websites to restrict or geo-block access to the entire state in order to avoid the cost and risk of litigation. And because there’s no threshold for how much “harmful” content a site must host, a single image or passage could be enough. That also means your personal website or blog—if it includes any “sexual content harmful to minors”—is also at risk. 

This law will likely be challenged, and eventually, halted, by the courts. But given that the state cannot enforce it, those challenges will not come until a parent sues a website. Until then, its mere existence poses a serious threat to free speech online. Risk-averse platforms may over-correct, over-censor, or even restrict access to the state entirely just to avoid the possibility of a lawsuit, as Pornhub has already done. And should sites impose age-verification schemes to comply, they will be a speech and privacy disaster for all state residents.

And let’s be clear: these state laws are not outliers. They are part of a growing political movement to redefine terms like “obscene,” “pornographic,” and “sexually explicit” as catchalls to restrict content for both adults and young people alike. What starts in one state and one lawsuit can quickly become a national blueprint.
Age-verification laws like these have relied on vague language, intimidating enforcement mechanisms, and public complacency to take root. Courts may eventually strike them down, but in the meantime, users, platforms, creators, and digital rights advocacy groups need to stay alert, speak up against these laws, and push back while they can. When governments expand censorship and surveillance offline, it's our job at EFF to protect your access to a free and open internet. Because if we don’t push back now, the messy, diverse, and open internet we know could disappear behind a wall of fear and censorship.

Ready to join us? Urge your state lawmakers to reject harmful age-verification laws. Call or email your representatives to oppose KOSA and any other proposed federal age-checking mandates. Make your voice heard by talking to your friends and family about what we all stand to lose if the age-gated internet becomes a global reality. Because the fight for a free internet starts with us.

Researchers glimpse the inner workings of protein language models

MIT Latest News - Mon, 08/18/2025 - 3:00pm

Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.

These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, there’s no way to determine how these models make their predictions or which protein features play the most important role in those decisions.

In a new study, MIT researchers have used a novel technique to open up that “black box” and allow them to determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.

“Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”

Onkar Gujral, an MIT graduate student, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.

Opening the black box

In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models that accelerated the development of AlphaFold, such as ESM2 and OmegaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.

Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.

In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.

However, in all of these studies, it has been impossible to know how the models were making their predictions.

“We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.

In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.

The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.

Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.

When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.

“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it's hard to interpret the neurons.”
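A scaled-down sketch of the expansion Gujral describes, going from a small dense representation to a wider sparse one (8 to 32 units here instead of 480 to 20,000). Random weights stand in for a trained encoder; in a real sparse autoencoder, training with a sparsity penalty is what makes the few active units meaningful:

```python
import random

# Toy sparse-autoencoder forward pass: expand a dense representation into a
# wider space, then keep only the top-k activations. Weights are random here;
# a trained model would learn them under an L1 sparsity penalty.

DENSE, SPARSE, TOP_K = 8, 32, 3
rng = random.Random(0)
W_enc = [[rng.uniform(-1, 1) for _ in range(DENSE)] for _ in range(SPARSE)]
W_dec = [[rng.uniform(-1, 1) for _ in range(SPARSE)] for _ in range(DENSE)]

def encode(x):
    """Project into the wider space (ReLU), then zero all but the top-k units."""
    z = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W_enc]
    threshold = sorted(z, reverse=True)[TOP_K - 1]
    return [zi if zi >= threshold and zi > 0 else 0.0 for zi in z]

def decode(z):
    """Reconstruct the dense representation from the sparse one."""
    return [sum(w * zi for w, zi in zip(row, z)) for row in W_dec]

protein_repr = [rng.uniform(-1, 1) for _ in range(DENSE)]  # stand-in embedding
sparse_repr = encode(protein_repr)
active = sum(1 for v in sparse_repr if v > 0)
print(f"{active} of {SPARSE} units active")   # at most TOP_K units fire
```

Because only a handful of the 32 units can fire for any one input, each unit is pushed toward representing a single recoverable feature, which is what makes the per-node labeling described below the quotes possible.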

Interpretable models

Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name) to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.

By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”

This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.

“When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.

Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.

“At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.

The research was funded by the National Institutes of Health. 

Eavesdropping on Phone Conversations Through Vibrations

Schneier on Security - Mon, 08/18/2025 - 7:02am

Researchers have managed to eavesdrop on cell phone voice conversations by using radar to detect vibrations. It’s more a proof of concept than anything else. The radar detector is only ten feet away, the setup is stylized, and accuracy is poor. But it’s a start.

Trump team readies more attacks on mainstream climate science

ClimateWire News - Mon, 08/18/2025 - 6:21am
The plans include a public debate on global warming. Scientists say that falsely implies the major tenets of climate research are unsettled.

How Trump’s tax plan for renewables will remake US energy

ClimateWire News - Mon, 08/18/2025 - 6:20am
New Treasury guidance rewrites decades-old standards for how wind and solar projects can qualify for lucrative credits.

Trump threatens to use tariffs to derail global climate measure

ClimateWire News - Mon, 08/18/2025 - 6:19am
The administration is escalating its attacks against a carbon tax for shipping emissions.

Biden EPA official challenges legality of DOE climate report

ClimateWire News - Mon, 08/18/2025 - 6:18am
Chris Frey has formally asked the Department of Energy to correct the report, which EPA is using to upend established climate science.

Texas solar manufacturer inks deal for US components

ClimateWire News - Mon, 08/18/2025 - 6:17am
The move shows how some panel-makers are adjusting to stricter requirements under President Donald Trump.

Florida DOGE targets local climate programs

ClimateWire News - Mon, 08/18/2025 - 6:16am
The state is demanding cities and counties detail "green new deal" spending, including funds for EVs, sustainable buildings and solar power.