Feed aggregator
War turned Pakistan into a solar power. Will other Asian nations follow?
EPA approves ocean carbon removal test, without mentioning climate
PacifiCorp pares back renewable plans after tax credit repeal
Insurers warn about climate lawsuits against fossil fuel industry
Hochul mulls deferring New York climate ambitions to 2040
California drought, wildfire risks grow as snow falls short
Warming winters lead to more nitrate pollution in drinking water near farms
Brussels unveils change to EU carbon market to fight rising prices
Tesla’s sluggish quarter to reset the new normal for EV sales
Possible US Government iPhone Hacking Tool Leaked
Wired writes (alternate source):
Security researchers at Google on Tuesday released a report describing what they’re calling “Coruna,” a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers...
MIT researchers measure traffic emissions, to the block, in real-time
In a study focused on New York City, MIT researchers have shown that existing sensors and mobile data can be used to generate a near real-time, high-resolution picture of auto emissions, which could be used to develop local transportation and decarbonization policies.
The new method produces much more detailed data than some other common approaches, which rely on intermittent samples of vehicle emissions. The researchers say it is also more practical, and scales better, than approaches that gather very granular emissions data from a small number of automobiles at a time. The work helps bridge the gap between less-detailed citywide emissions inventories and highly detailed analyses based on individual vehicles.
“Our model, by combining real-time traffic cameras with multiple data sources, allows extrapolating very detailed emission maps, down to a single road and hour of the day,” says Paolo Santi, a principal research scientist in the MIT Senseable City Lab and co-author of a new paper detailing the project’s results. “Such detailed information can prove very helpful to support decision-making and understand effects of traffic and mobility interventions.”
Carlo Ratti, director of the MIT Senseable City Lab, notes that the research “is part of our lab’s ongoing quest into hyperlocal measurements of air quality and other environmental factors. By integrating multiple streams of data, we can reach a level of precision that was unthinkable just a few years ago — giving policymakers powerful new tools to understand and protect human health.”
The new method also protects privacy, since it uses computer vision techniques to recognize types of vehicles, but without compiling license plate numbers. The study leverages technologies, including those already installed at intersections, to yield richer data about vehicle movement and pollution.
“The very basic idea is just to estimate traffic emissions using existing data sources in a cost-effective way,” says Songhua Hu, a former postdoc in the Senseable City Lab, and now an assistant professor at City University of Hong Kong.
The paper, “Ubiquitous Data-driven Framework for Traffic Emission Estimation and Policy Evaluation,” is published in Nature Sustainability.
The authors are Hu; Santi; Tom Benson, a researcher in the Senseable City Lab; Xuesong Zhou, a professor of transportation engineering at Arizona State University; An Wang, an assistant professor at Hong Kong Polytechnic University; Ashutosh Kumar, a visiting doctoral student at the Senseable City Lab; and Ratti. The MIT Senseable City Lab is part of MIT’s Department of Urban Studies and Planning.
Manhattan measurements
To conduct the study, the researchers used images from 331 cameras already in use at Manhattan intersections, along with anonymized location records from over 1.75 million mobile phones. Applying vehicle-recognition programs and defining 12 broad categories of automobiles, the scholars found they could place 93 percent of vehicles in the correct category. The imaging also yielded important information about the specific ways traffic signals affect traffic flow. That matters because traffic signals are a major cause of stop-and-go driving, which strongly affects urban emissions but is often omitted from conventional inventories.
The mobile phone data then provided rich information about the overall patterns of traffic and movement of individual vehicles throughout the city. The scholars combined the camera and phone data with known information about emissions rates to arrive at their own emissions estimates for New York City.
“We just need to input all emission-related information based on existing urban data sources, and we can estimate the traffic emissions,” Hu says.
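The paper's full pipeline is considerably more sophisticated, but the basic bookkeeping behind such an estimate is easy to sketch. Below is a minimal, hypothetical Python illustration of the final step: combining per-segment vehicle counts (as a camera classifier might produce) and mean speeds (as phone traces might provide) with per-class emission factors. Every name, number, and the speed-adjustment curve here is invented for illustration and does not come from the study.

```python
# Hypothetical sketch: per-segment, per-hour emissions from classified
# vehicle counts and a speed-dependent adjustment. Placeholder values
# throughout; nothing here is taken from the paper.

from dataclasses import dataclass

# Illustrative grams-of-CO2-per-km factors by vehicle class (made up).
EMISSION_FACTORS_G_PER_KM = {"car": 192.0, "suv": 235.0, "bus": 1300.0, "truck": 900.0}

@dataclass
class SegmentHour:
    segment_id: str
    length_km: float
    mean_speed_kmh: float  # e.g., from anonymized mobile-phone traces
    counts: dict           # vehicle class -> count, e.g., from a camera classifier

def speed_adjustment(speed_kmh: float) -> float:
    """Crude stand-in for a speed-emission curve: stop-and-go traffic
    (low mean speed) emits more per km than free-flowing traffic."""
    return 2.0 if speed_kmh < 15 else 1.3 if speed_kmh < 30 else 1.0

def segment_emissions_kg(obs: SegmentHour) -> float:
    """Total emissions on one road segment in one hour, in kg."""
    grams = sum(
        n * obs.length_km * EMISSION_FACTORS_G_PER_KM[vtype]
        for vtype, n in obs.counts.items()
    )
    return grams * speed_adjustment(obs.mean_speed_kmh) / 1000.0

# Example: one block-long segment during a congested hour.
obs = SegmentHour("5th_ave_block_12", 0.08, 12.0,
                  {"car": 410, "suv": 150, "bus": 9, "truck": 22})
print(f"{segment_emissions_kg(obs):.1f} kg CO2 this hour")
```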
Moreover, the researchers evaluated how emissions might change under different scenarios in which traffic patterns or the vehicle mix shift.
For one, they modeled what would happen to emissions if a certain percentage of travel demand shifted from private vehicles to buses. In another scenario, they looked at what would happen if morning and evening rush hours were spread out over a longer period, leaving fewer vehicles on the road at once. They also modeled the effects of replacing fine-grained emissions inputs with citywide averages, finding that the rougher estimates could deviate from the fine-tuned results by anywhere from −49 percent to 25 percent. That underscores how seemingly small simplifications can introduce large errors into emissions estimates.
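Continuing the hypothetical sketch above, a modal-shift scenario of this kind amounts to editing the observed vehicle counts and re-running the same bookkeeping. The 20 percent shift and the assumed bus ridership are, again, arbitrary illustrations rather than figures from the study.

```python
# Reuses SegmentHour, segment_emissions_kg, and obs from the sketch above.

def shift_cars_to_buses(obs: SegmentHour, fraction: float,
                        riders_per_bus: int = 40) -> SegmentHour:
    """Move a fraction of car trips onto buses; return a new observation."""
    moved = int(obs.counts.get("car", 0) * fraction)
    counts = dict(obs.counts)
    counts["car"] = counts.get("car", 0) - moved
    counts["bus"] = counts.get("bus", 0) + max(1, moved // riders_per_bus)
    return SegmentHour(obs.segment_id, obs.length_km, obs.mean_speed_kmh, counts)

baseline = segment_emissions_kg(obs)
scenario = segment_emissions_kg(shift_cars_to_buses(obs, 0.20))
print(f"20% modal shift changes this block's emissions by "
      f"{100 * (scenario - baseline) / baseline:+.1f}%")
```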
Major emissions drop
On one level, this work involved altering inputs into the model and seeing what emerged. But one scenario the researchers studied is based on a real-world change: In January 2025, New York City implemented congestion pricing south of 60th Street in Manhattan.
To study that, the researchers looked at what happened to vehicle traffic at intervals of two, four, six, and eight weeks after the program began. Overall, congestion pricing lowered traffic volume by about 10 percent, but emissions fell even further, by 16 to 22 percent.
This finding aligns with a previous study by researchers at Cornell University, which reported a 22 percent reduction in particulate matter (PM2.5) levels within the pricing zone. The MIT team also found that these reductions were not evenly distributed across the network, with larger declines on some major streets and more mixed effects outside the pricing zone.
“We see these kinds of huge changes after the congestion pricing began,” Hu says. “I think that’s a demonstration that our model can be very helpful if a government really wants to know if a new policy converts into real-world impact.”
There are additional forms of data that could be fed into the researchers’ new method. For instance, in related work in Amsterdam, the team leveraged dashboard cams from vehicles to yield rich information about vehicle movement.
“With our model we can make any camera used in cities, from the hundreds of traffic cameras to the thousands of dash cams, a powerful device to estimate traffic emissions in real-time,” says Fábio Duarte, the associate director of research and design at the MIT Senseable City Lab, who has worked on multiple related studies.
The research was supported by the city of Amsterdam, the AMS Institute, and Abu Dhabi’s Department of Municipalities and Transport.
It was also supported by the MIT Senseable City Consortium, which consists of Atlas University, the city of Laval, the city of Rio de Janeiro, Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, the Dubai Future Foundation, FAE Technology, KAIST Center for Advanced Urban Systems, Sondotecnica, Toyota, and Volkswagen Group America.
Evaluating the ethics of autonomous systems
Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.
But while these AI-driven outputs may be technically optimal, are they fair? What if a low-cost power distribution strategy leaves disadvantaged neighborhoods more vulnerable to outages than higher-income areas?
To help stakeholders quickly pinpoint potential ethical dilemmas before deployment, MIT researchers developed an automated evaluation method that captures the interplay between measurable outcomes, like cost or reliability, and qualitative or subjective values, such as fairness.
The system separates objective evaluations from user-defined human values, using a large language model (LLM) as a proxy for humans to capture and incorporate stakeholder preferences.
The adaptive framework selects the best scenarios for further evaluation, streamlining a process that typically requires costly and time-consuming manual effort. These test cases can show situations where autonomous systems align well with human values, as well as scenarios that unexpectedly fall short of ethical criteria.
“We can insert a lot of rules and guardrails into AI systems, but those safeguards can only prevent the things we can imagine happening. It is not enough to say, ‘Let’s just use AI because it has been trained on this information.’ We wanted to develop a more systematic way to discover the unknown unknowns and have a way to predict them before anything bad happens,” says senior author Chuchu Fan, an associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).
Fan is joined on the paper by lead author Anjali Parashar, a mechanical engineering graduate student; Yingke Li, an AeroAstro postdoc; and others at MIT and Saab. The research will be presented at the International Conference on Learning Representations.
Evaluating ethics
In a large system like a power grid, evaluating the ethical alignment of an AI model’s recommendations in a way that considers all objectives is especially difficult.
Most testing frameworks rely on pre-collected data, but labeled data on subjective ethical criteria are often hard to come by. In addition, because ethical values and AI systems are both constantly evolving, static evaluation methods based on written codes or regulatory documents require frequent updates.
Fan and her team approached this problem from a different perspective. Drawing on their prior work evaluating robotic systems, they developed an experimental design framework to identify the most informative scenarios, which human stakeholders would then evaluate more closely.
Their two-part system, called Scalable Experimental Design for System-level Ethical Testing (SEED-SET), incorporates quantitative metrics and ethical criteria. It can identify scenarios that effectively meet measurable requirements and align well with human values, and vice versa.
“We don’t want to spend all our resources on random evaluations. So, it is very important to guide the framework toward the test cases we care the most about,” Li says.
Importantly, SEED-SET does not need pre-existing evaluation data, and it adapts to multiple objectives.
For instance, a power grid may have several user groups, including a large rural community and a data center. While both groups may want low-cost and reliable power, each group’s ethical priorities may differ widely.
These ethical criteria may not be well-specified, so they can’t be measured analytically.
The power grid operator wants to find the most cost-effective strategy that best meets the subjective ethical preferences of all stakeholders.
SEED-SET tackles this challenge by splitting the problem into two, following a hierarchical structure. An objective model considers how the system performs on tangible metrics like cost. Then a subjective model that considers stakeholder judgements, like perceived fairness, builds on the objective evaluation.
“The objective part of our approach is tied to the AI system, while the subjective part is tied to the users who are evaluating it. By decomposing the preferences in a hierarchical fashion, we can generate the desired scenarios with fewer evaluations,” Parashar says.
Encoding subjectivity
To perform the subjective assessment, the system uses an LLM as a proxy for human evaluators. The researchers encode the preferences of each user group into a natural language prompt for the model.
The LLM uses these instructions to compare two scenarios, selecting the preferred design based on the ethical criteria.
“After seeing hundreds or thousands of scenarios, a human evaluator can suffer from fatigue and become inconsistent in their evaluations, so we use an LLM-based strategy instead,” Parashar explains.
SEED-SET uses the selected scenario to simulate the overall system (in this case, a power distribution strategy). These simulation results guide its search for the next best candidate scenario to test.
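The article does not spell out the framework’s internals, but the loop it describes can be sketched in a few lines. In the hypothetical Python sketch below, a stubbed-out LLM proxy compares candidate scenarios pairwise against a natural-language statement of stakeholder preferences, and candidates that look good on the objective metric yet lose the ethical comparison are flagged for human review. Every function, prompt, and heuristic is an invented stand-in, not the authors’ implementation.

```python
# Hypothetical sketch of an adaptive ethical-testing loop in the spirit
# of SEED-SET, as described in the article. All names and heuristics
# are illustrative stand-ins for the real system.

import random

STAKEHOLDER_PROMPT = (
    "You represent a rural community and a data center sharing a grid. "
    "Prefer the power-distribution scenario that is fairer to the rural "
    "community during peak demand, even at modestly higher cost."
)

def llm_prefers(prompt: str, a: dict, b: dict) -> dict:
    """Stub for the LLM proxy: return the scenario judged preferable
    under the prompt. A real system would query a language model."""
    # Placeholder heuristic standing in for the model's judgment.
    return min((a, b), key=lambda s: s["rural_outage_share"])

def simulate(scenario: dict) -> float:
    """Stub objective model: the cost of a power-distribution strategy."""
    return scenario["cost"]

def random_scenario() -> dict:
    return {"cost": random.uniform(1.0, 2.0),
            "rural_outage_share": random.uniform(0.0, 1.0)}

# Adaptive loop: keep the ethically preferred scenario and flag cheap
# but ethically dispreferred candidates, the informative edge cases.
random.seed(0)
best = random_scenario()
flagged = []
for _ in range(50):
    candidate = random_scenario()
    preferred = llm_prefers(STAKEHOLDER_PROMPT, best, candidate)
    loser = candidate if preferred is best else best
    if simulate(loser) < simulate(preferred):
        flagged.append(loser)  # objectively better, ethically worse
    best = preferred

print(f"{len(flagged)} cheap-but-dispreferred scenarios flagged for review")
```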
In the end, SEED-SET intelligently selects the most representative scenarios, both those that satisfy the objective metrics and ethical criteria and those that fail to align with them. In this way, users can analyze the performance of the AI system and adjust its strategy.
For instance, SEED-SET can pinpoint cases of power distribution that prioritize higher-income areas during periods of peak demand, leaving underprivileged neighborhoods more prone to outages.
To test SEED-SET, the researchers evaluated realistic autonomous systems, like an AI-driven power grid and an urban traffic routing system. They measured how well the generated scenarios aligned with ethical criteria.
The system generated more than twice as many optimal test cases as the baseline strategies in the same amount of time, while uncovering many scenarios other approaches overlooked.
“As we shifted the user preferences, the set of scenarios SEED-SET generated changed drastically. This tells us the evaluation strategy responds well to the preferences of the user,” Parashar says.
To measure how useful SEED-SET would be in practice, the researchers will need to conduct a user study to see if the scenarios it generates help with real decision-making.
In addition to running such a study, the researchers plan to explore the use of more efficient models that can scale up to larger problems with more criteria, such as evaluating LLM decision-making.
This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency.
Is “Hackback” Official US Cybersecurity Strategy?
The 2026 US “Cyber Strategy for America” document is mostly the same thing we’ve seen out of the White House for over a decade, but with a more aggressive tone.
But one sentence stood out: “We will unleash the private sector by creating incentives to identify and disrupt adversary networks and scale our national capabilities.” This sounds like a call for hackback: giving private companies permission to conduct offensive cyber operations.
The Economist noticed this, too (alternate link).
I think this is an incredibly dumb idea:
In warfare, the notion of counterattack is extremely powerful. Going after the enemy—its positions, its supply lines, its factories, its infrastructure—is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to an attorney, and to be presumed innocent until proven guilty...
Digital Hopes, Real Power: From Revolution to Regulation
This is the second installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings.
From Russia—where wartime censorship and more stringent platform controls have choked dissenting voices—to Nigeria, where aggressive takedown orders have turned social media into a political battleground, to Turkey, where sweeping “disinformation” laws have made platforms heavily policed spaces, freedom of expression online is under attack. Per Freedom House’s 2023 Freedom on the Net report, 66% of internet users live in countries where political or social content is blocked, and 78% live in countries where people have been arrested for online posts. New social media regulations have emerged in dozens of countries in the past year alone.
The online landscape looks markedly different than it did fifteen years ago. Back then, social media was still new and largely free from legal restrictions: platforms moderated content in response to user reports, governments rarely targeted them directly, and blocks (when they happened) were temporary, with censorship mostly focused on whole websites that VPNs or proxies could easily bypass. The internet was far from free, but governments’ crude tactics left space for circumvention.
Those early restrictions, crude as they were, marked the start of a rapid evolution in online censorship. Governments tested legal and technical pressure to mute dissent and force platforms’ compliance: Thailand blocked thousands of YouTube videos in 2007 over critical content, and Turkey demanded takedowns from YouTube before blocking the site entirely. By 2011, governments weren’t just reacting—they had learned to pressure platforms into becoming instruments of state censorship, shifting their playbooks from blunt blocks to sophisticated systems of control that simple VPNs could no longer reliably bypass. Governments across the region were watching closely, and by the time the 2011 uprisings began, they were prepared to respond.
Looking Back
After learning that a Facebook page—We Are All Khaled Said, created in honor of a young man killed by police—had sparked Egypt’s street protests, Western media hailed online platforms as engines of democracy. Wael Ghonim, the page’s co-creator, told a journalist: “This revolution started on Facebook.” That claim was debated and contested for years; critically, Facebook had suspended the page two months earlier because its administrators’ pseudonyms violated the platform’s real-name policy, restoring it only after advocates intervened.
Once the protests moved to the streets, Egypt’s government—alert to social media’s power—quickly blocked Facebook and Twitter, then enacted a near-total shutdown (more on that in part 4 of this series). As history shows, the measures didn’t stop the revolution, and Egyptian president Hosni Mubarak stepped down. For a brief moment, freedom appeared to be on the horizon. Unfortunately, that moment was short-lived.
Egypt’s Digital Dystopia
Just as the Egyptian military government quashed revolution in the streets, it also shut down online civic space. Today, Egypt ranks low on markers of internet freedom. The military government that has ruled Egypt since 2013 has imprisoned human rights defenders and enacted laws—including the 2015 Counter-terrorism Law and the 2018 Cybercrime Law—that grant the state broad authority to suppress speech and prosecute offenders.
The 2018 law demonstrates the ease with which cybercrime laws can be abused. Article 7 allows websites that constitute “a threat to national security” or to the “national economy” to be blocked. The Association for Freedom of Thought and Expression (AFTE) has criticized the law’s loose definition of “national security” as “everything related to the independence, stability, security, unity and territorial integrity of the homeland.” Notably, individuals can also be penalized—with sentences of up to six months’ imprisonment—for merely accessing banned websites.
Article 25, which prohibits the use of technology to “infringe on any family principles or values in Egyptian society,” and Article 26, which prohibits the dissemination of material that “violates public morals,” have been used in recent years to prosecute young people who use social media in ways the government disapproves of. Many of those prosecuted have been young women; for instance, belly dancer Sama Al Masry was sentenced to three years in prison and fined 300,000 Egyptian pounds under Article 26.
Beyond Egypt: Regional Trends
Egypt’s trajectory reflects a wider regional and global pattern. In the years following the uprisings, governments moved quickly to formalize legal authority over digital space, often under the banner of combating cybercrime, terrorism, or “false information.” These laws often contain vaguely worded provisions criminalizing “misuse of social media” or “harming national unity,” giving authorities wide discretion to prosecute speech.
In Qatar and Bahrain, a social media post can result in up to five years in jail. In 2018, prominent Bahraini human rights defender Nabeel Rajab was convicted of “spreading false rumours in time of war”, “insulting public authorities”, and “insulting a foreign country” for tweets he posted about the killing of civilians in Yemen, and was sentenced to five years’ imprisonment.
Two years later, Qatar amended its penal code to set criminal penalties for spreading “fake news.” Article 136 (bis) criminalizes broadcasting, publishing, or republishing “rumors or statements or false or malicious news or sensational propaganda, inside or outside the state, whenever it is intended to harm national interests or incite public opinion or disturb the social or public order of the state,” punishable by up to five years in prison and/or a fine of 100,000 Qatari riyals. The penalty is doubled if the crime is committed in wartime.
Now, as war has once again reached the region, these laws are being put to the test. Bahraini authorities have arrested at least 100 people over protests or expression related to the war, while Qatar has arrested more than 300 people on charges of spreading “misleading information.”
And in the UAE, at least 35 people—most or all of them foreign nationals—have been arrested and “accused of spreading misleading and fabricated content online that could harm national defence efforts and fuel public panic,” according to the Times of India. The arrests fall under the UAE’s 2022 Federal Decree Law No. 34 on Combating Rumours and Cybercrimes, which Human Rights Watch says is, along with the country’s Penal Code, “used to silence dissidents, journalists, activists, and anyone the authorities perceived to be critical of the government, its policies, or its representatives.”
From Regional Practice to Global Pattern
Today, roughly four out of five countries worldwide have enacted cybercrime legislation, a dramatic expansion over the past decade, with many governments adopting or revising such laws in the years following the Arab uprisings.
Outside the region, other nations have repurposed these laws to police speech. In Nigeria, journalists have been detained under the Cybercrime Act, with dozens of prosecutions documented since 2015. Bangladesh’s Digital Security Act has been used in thousands of cases—including hundreds against journalists—while in Uganda, authorities have prosecuted political critics under computer misuse laws for social media posts.
Cybercrime laws are only one piece of a broader toolkit that governments now deploy to control digital spaces. Over the past decade, authorities have introduced sweeping “disinformation” laws, platform liability rules, age verification laws, and data localization requirements that force companies to store data domestically or appoint legal representatives within national jurisdictions. These measures give governments leverage over global technology firms, enabling them to demand faster content removals, obtain user data, or threaten steep fines and throttling if platforms fail to comply. Rather than relying solely on blunt instruments like blocking entire websites, states increasingly govern speech through layered regulatory systems that pressure platforms to police users on the state’s behalf.
The platforms too have changed. The same social media companies that were once championed as tools of democratic mobilization now operate in more constrained environments—and often act as willing participants in repressing speech. Facing financial penalties and the prospect of being blocked entirely, many companies expanded compliance with takedown requests after 2011, as can be seen in the companies’ own transparency reports. They later invested heavily in automated technologies that remove vast quantities of content before it is ever publicly available.
Rights groups around the world, including EFF, have warned that these dynamics disproportionately impact historically marginalized and vulnerable groups, as well as journalists and other human rights defenders. Research by the Palestinian digital rights organization 7amleh and reporting by Human Rights Watch have documented how content moderation policies, government pressure, and opaque enforcement mechanisms increasingly converge—leaving activists, journalists, and human rights defenders caught between state censorship and platform governance.
The New Architecture of Repression
Looking back now, it’s clear that, fifteen years ago, governments were caught off guard. They crudely blocked platforms, shut down networks, and scrambled to contain movements they did not fully understand. But in the years since, states have systematically adapted, transforming what were once reactive measures into durable systems of control.
Today’s controls are embedded in law, outsourced to platforms, and justified through the language of security, safety, and order. Cybercrime statutes, disinformation frameworks, and platform regulations form a layered architecture that allows states to shape online expression at scale while maintaining a veneer of legality. In this system, repression is often procedural, bureaucratic, and continuous.
The question is no longer whether the internet can enable dissent, but whether it can still sustain it under these conditions.
This is the second installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. Read the rest of the series here.
