Feed aggregator
EPA to propose rolling back climate rule for power plants Wednesday
Recovering from the past and transitioning to a better energy future
As the frequency and severity of extreme weather events grow, it may become increasingly necessary to employ a bolder approach to climate change, warned Emily A. Carter, the Gerhard R. Andlinger Professor in Energy and the Environment at Princeton University. Carter made her case for why the energy transition is no longer enough in the face of climate change while speaking at the MIT Energy Initiative (MITEI) Presents: Advancing the Energy Transition seminar on the MIT campus.
“If all we do is take care of what we did in the past — but we don’t change what we do in the future — then we’re still going to be left with very serious problems,” she said. Our approach to climate change mitigation must comprise transformation, intervention, and adaptation strategies, said Carter.
Transitioning to a decarbonized electricity system is one piece of the puzzle. Growing amounts of solar and wind energy — along with nuclear, hydropower, and geothermal — are slowly transforming the electricity landscape, but Carter noted that there are new technologies further down the pipeline.
“Advanced geothermal may come on in the next couple of decades. Fusion will only really start to play a role later in the century, but could provide firm electricity such that we can start to decommission nuclear,” said Carter, who is also a senior strategic advisor and associate laboratory director at the Department of Energy’s Princeton Plasma Physics Laboratory.
Taking this a step further, Carter outlined how this carbon-free electricity should then be used to electrify everything we can. She highlighted the industrial sector as a critical area for transformation: “The energy transition is about transitioning off of fossil fuels. If you look at the manufacturing industries, they are driven by fossil fuels right now. They are driven by fossil fuel-driven thermal processes.” Carter noted that thermal energy is much less efficient than electricity and highlighted electricity-driven strategies that could replace heat in manufacturing, such as electrolysis, plasmas, light-emitting diodes (LEDs) for photocatalysis, and joule heating.
The transportation sector is also a key area for electrification, Carter said. While electric vehicles have become increasingly common in recent years, heavy-duty transportation is not as easily electrified. The solution? “Carbon-neutral fuels for heavy-duty aviation and shipping,” she said, emphasizing that these fuels will need to become part of the circular economy. “We know that when we burn those fuels, they’re going to produce CO2 [carbon dioxide] again. They need to come from a source of CO2 that is not fossil-based.”
The next step is intervention in the form of carbon dioxide removal, which then necessitates methods of storage and utilization, according to Carter. “There’s a lot of talk about building large numbers of pipelines to capture the CO2 — from fossil fuel-driven power plants, cement plants, steel plants, all sorts of industrial places that emit CO2 — and then piping it and storing it in underground aquifers,” she explained. Offshore pipelines are much more expensive than those on land, but can mitigate public concerns over their safety. Europe is focusing its efforts exclusively offshore for this very reason, and the same could be true for the United States, Carter said.
Once carbon dioxide is captured, commercial utilization may provide economic leverage to accelerate sequestration, even if only a few gigatons are used per year, Carter noted. Through mineralization, CO2 can be converted into carbonates, which could be used in building materials such as concrete and road-paving materials.
There is another form of intervention that Carter currently views as a last resort: solar geoengineering, sometimes known as solar radiation management or SRM. In 1991, Mount Pinatubo in the Philippines erupted and released sulfur dioxide into the stratosphere, which caused a temporary cooling of the Earth by approximately 0.5 degree Celsius for over a year. SRM seeks to recreate that cooling effect by injecting particles into the atmosphere that reflect sunlight. According to Carter, there are three main strategies: stratospheric aerosol injection, cirrus cloud thinning (thinning clouds to let more infrared radiation emitted by the earth escape to space), and marine cloud brightening (brightening clouds with sea salt so they reflect more light).
“My view is, I hope we don't ever have to do it, but I sure think we should understand what would happen in case somebody else just decides to do it. It’s a global security issue,” said Carter. “In principle, it’s not so difficult technologically, so we’d like to really understand and to be able to predict what would happen if that happened.”
With any technology, stakeholder and community engagement is essential for deployment, Carter said. She emphasized the importance of both respectfully listening to concerns and thoroughly addressing them, stating, “Hopefully, there’s enough information given to assuage their fears. We have to gain the trust of people before any deployment can be considered.”
A crucial component of this trust starts with the responsibility of the scientific community to be transparent and critique each other’s work, Carter said. “Skepticism is good. You should have to prove your proof of principle.”
MITEI Presents: Advancing the Energy Transition is an MIT Energy Initiative speaker series highlighting energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. The series will continue in fall 2025. For more information on this and additional events, visit the MITEI website.
Inroads to personalized AI trip planning
Travel agents help to provide end-to-end logistics — like transportation, accommodations, and meals — for businesspeople, vacationers, and everyone in between. For those looking to make their own arrangements, large language models (LLMs) seem like they would be a strong tool to employ for this task because of their ability to iteratively interact using natural language, provide some commonsense reasoning, collect information, and call in other tools to help with the task at hand. However, recent work has found that state-of-the-art LLMs struggle with complex logistical and mathematical reasoning, as well as problems with multiple constraints, like trip planning, where they’ve been found to provide viable solutions 4 percent or less of the time, even with additional tools and application programming interfaces (APIs).
Subsequently, a research team from MIT and the MIT-IBM Watson AI Lab reframed the issue to see if they could increase the success rate of LLM solutions for complex problems. “We believe a lot of these planning problems are naturally a combinatorial optimization problem,” where you need to satisfy several constraints in a certifiable way, says Chuchu Fan, associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and the Laboratory for Information and Decision Systems (LIDS). She is also a researcher in the MIT-IBM Watson AI Lab. Her team applies machine learning, control theory, and formal methods to develop safe and verifiable control systems for robotics, autonomous systems, controllers, and human-machine interactions.
Noting the transferable nature of their work for travel planning, the group sought to create a user-friendly framework that can act as an AI travel broker to help develop realistic, logical, and complete travel plans. To achieve this, the researchers combined common LLMs with algorithms and a complete satisfiability solver. Solvers are mathematical tools that rigorously check if criteria can be met and how, but they require complex computer programming for use. This makes them natural companions to LLMs for problems like these, where users want help planning in a timely manner, without the need for programming knowledge or research into travel options. Further, if a user’s constraint cannot be met, the new technique can identify and articulate where the issue lies and propose alternative measures to the user, who can then choose to accept, reject, or modify them until a valid plan is formulated, if one exists.
“Different complexities of travel planning are something everyone will have to deal with at some point. There are different needs, requirements, constraints, and real-world information that you can collect,” says Fan. “Our idea is not to ask LLMs to propose a travel plan. Instead, an LLM here is acting as a translator to translate this natural language description of the problem into a problem that a solver can handle [and then provide that to the user],” says Fan.
Co-authoring a paper on the work with Fan are Yang Zhang of MIT-IBM Watson AI Lab, AeroAstro graduate student Yilun Hao, and graduate student Yongchao Chen of MIT LIDS and Harvard University. This work was recently presented at the Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics.
Breaking down the solver
Math tends to be domain-specific. For example, in natural language processing, LLMs perform regressions to predict the next token, a.k.a. “word,” in a series to analyze or create a document. This works well for generalizing diverse human inputs. LLMs alone, however, wouldn’t work for formal verification applications, like in aerospace or cybersecurity, where circuit connections and constraint tasks need to be complete and proven, otherwise loopholes and vulnerabilities can sneak by and cause critical safety issues. Here, solvers excel, but they need fixed-format inputs and struggle with unsatisfiable queries. A hybrid technique, though, provides an opportunity to develop solutions for complex problems, like trip planning, in a way that’s intuitive for everyday people.
“The solver is really the key here, because when we develop these algorithms, we know exactly how the problem is being solved as an optimization problem,” says Fan. Specifically, the research group used a solver called satisfiability modulo theories (SMT), which determines whether a formula can be satisfied. “With this particular solver, it’s not just doing optimization. It’s doing reasoning over a lot of different algorithms there to understand whether the planning problem is possible or not to solve. That’s a pretty significant thing in travel planning. It’s not a very traditional mathematical optimization problem because people come up with all these limitations, constraints, restrictions,” notes Fan.
Translation in action
The “travel agent” works in four steps that can be repeated as needed. The researchers used GPT-4, Claude-3, or Mistral-Large as the method’s LLM. First, the LLM parses a user’s requested travel plan prompt into planning steps, noting preferences for budget, hotels, transportation, destinations, attractions, restaurants, and trip duration in days, as well as any other user prescriptions. Those steps are then converted into executable Python code (with a natural language annotation for each of the constraints), which calls APIs such as CitySearch and FlightSearch to collect data, and the SMT solver to begin executing the steps laid out in the constraint satisfaction problem. If a sound and complete solution can be found, the solver outputs the result to the LLM, which then provides a coherent itinerary to the user.
If one or more constraints cannot be met, the framework begins looking for an alternative. The solver outputs code identifying the conflicting constraints (with its corresponding annotation) that the LLM then provides to the user with a potential remedy. The user can then decide how to proceed, until a solution (or the maximum number of iterations) is reached.
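To make the division of labor concrete, here is a minimal, hypothetical sketch of the solver side of such a pipeline. It is not the authors' code: the article describes an SMT solver generally, so the use of Z3's Python bindings here, and the costs and budget, are illustrative assumptions. What it shows is the core idea: constraints carry natural-language annotations, the solver checks whether they can all hold, and when they cannot, the conflicting ones are surfaced so the LLM can explain them to the user.

```python
# A minimal sketch (not the paper's implementation), assuming Z3's Python
# bindings (pip install z3-solver) and invented example numbers.
from z3 import Int, Solver, sat

budget, nights = 1200, 4                      # hypothetical user inputs
flight_cost = Int("flight_cost")
hotel_per_night = Int("hotel_per_night")

s = Solver()
s.set(unsat_core=True)
# Each constraint is tracked with a natural-language label, mirroring the
# annotations the LLM attaches when translating the user's request.
s.assert_and_track(flight_cost >= 300, "round-trip flights start at $300")
s.assert_and_track(hotel_per_night >= 150, "hotels start at $150 per night")
s.assert_and_track(flight_cost + nights * hotel_per_night <= budget,
                   "total cost stays within the $1,200 budget")

if s.check() == sat:
    print("Feasible plan:", s.model())        # handed back to the LLM to narrate
else:
    # The unsat core names the conflicting labels, which the LLM can turn
    # into a suggestion (e.g., raise the budget or shorten the trip).
    print("Conflicting constraints:", s.unsat_core())
```

Because each tracked label doubles as an explanation, an infeasible request does not dead-end; the unsatisfiable core becomes the raw material for the alternative offered back to the user.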
Generalizable and robust planning
The researchers tested their method using the aforementioned LLMs against other baselines: GPT-4 by itself, OpenAI o1-preview by itself, GPT-4 with a tool to collect information, and a search algorithm that optimizes for total cost. Using the TravelPlanner dataset, which includes data for viable plans, the team looked at multiple performance metrics: how frequently a method could deliver a solution, whether the solution satisfied commonsense criteria like not visiting two cities in one day, the method’s ability to meet one or more constraints, and a final pass rate indicating that it could meet all constraints. The new technique generally achieved over a 90 percent pass rate, compared to 10 percent or lower for the baselines. The team also explored adding a JSON representation within the query step, which made it even easier for the method to provide solutions, yielding pass rates of 84.4 to 98.9 percent.
The MIT-IBM team posed additional challenges for their method. They looked at how important each component of the solution was — for example, by removing human feedback or the solver — and how that affected plan adjustments for unsatisfiable queries within 10 or 20 iterations, using a modified version of TravelPlanner and a new dataset they created, called UnsatChristmas, that includes unseen constraints. On average, the framework achieved 78.6 and 85 percent success, rising to 81.6 and 91.7 percent with additional plan-modification rounds. The researchers also analyzed how well it handled new, unseen constraints and paraphrased query-step and step-code prompts. In both cases, it performed very well, especially with an 86.7 percent pass rate for the paraphrasing trial.
Lastly, the MIT-IBM researchers applied their framework to other domains, with tasks like block picking, task allocation, the traveling salesman problem, and warehouse operation. In these tasks, the method must select numbered, colored blocks and maximize its score; optimize robot task assignment for different scenarios; plan trips that minimize distance traveled; and complete and optimize robot tasks in a warehouse setting.
“I think this is a very strong and innovative framework that can save a lot of time for humans, and also, it’s a very novel combination of the LLM and the solver,” says Hao.
This work was funded, in part, by the Office of Naval Research and the MIT-IBM Watson AI Lab.
Melding data, systems, and society
Research that crosses the traditional boundaries of academic disciplines, and boundaries between academia, industry, and government, is increasingly widespread, and has sometimes led to the spawning of significant new disciplines. But Munther Dahleh, a professor of electrical engineering and computer science at MIT, says that such multidisciplinary and interdisciplinary work often suffers from a number of shortcomings and handicaps compared to more traditionally focused disciplinary work.
But increasingly, he says, the profound challenges that face us in the modern world — including climate change, biodiversity loss, how to control and regulate artificial intelligence systems, and the identification and control of pandemics — require such meshing of expertise from very different areas, including engineering, policy, economics, and data analysis. That realization is what guided him, a decade ago, in the creation of MIT’s pioneering Institute for Data, Systems, and Society (IDSS), aiming to foster a more deeply integrated and lasting set of collaborations than the usual temporary and ad hoc associations that occur for such work.
Dahleh has now written a book detailing the process of analyzing the landscape of existing disciplinary divisions at MIT and conceiving of a way to create a structure aimed at breaking down some of those barriers in a lasting and meaningful way, in order to bring about this new institute. The book, “Data, Systems, and Society: Harnessing AI for Societal Good,” was published this March by Cambridge University Press.
The book, Dahleh says, is his attempt “to describe our thinking that led us to the vision of the institute. What was the driving vision behind it?” It is aimed at a number of different audiences, he says, but in particular, “I’m targeting students who are coming to do research that they want to address societal challenges of different types, but utilizing AI and data science. How should they be thinking about these problems?”
A key concept that has guided the structure of the institute is something he refers to as “the triangle.” This refers to the interaction of three components: physical systems, people interacting with those physical systems, and then regulation and policy regarding those systems. Each of these affects, and is affected by, the others in various ways, he explains. “You get a complex interaction among these three components, and then there is data on all these pieces. Data is sort of like a circle that sits in the middle of this triangle and connects all these pieces,” he says.
When tackling any big, complex problem, he suggests, it is useful to think in terms of this triangle. “If you’re tackling a societal problem, it’s very important to understand the impact of your solution on society, on the people, and the role of people in the success of your system,” he says. Often, he says, “solutions and technology have actually marginalized certain groups of people and have ignored them. So the big message is always to think about the interaction between these components as you think about how to solve problems.”
As a specific example, he cites the Covid-19 pandemic. That was a perfect example of a big societal problem, he says, and illustrates the three sides of the triangle: there’s the biology, which was little understood at first and was subject to intensive research efforts; there was the contagion effect, having to do with social behavior and interactions among people; and there was the decision-making by political leaders and institutions, in terms of shutting down schools and companies or requiring masks, and so on. “The complex problem we faced was the interaction of all these components happening in real-time, when the data wasn’t all available,” he says.
Making a decision, for example shutting schools or businesses, based on controlling the spread of the disease, had immediate effects on economics and social well-being and health and education, “so we had to weigh all these things back into the formula,” he says. “The triangle came alive for us during the pandemic.” As a result, IDSS “became a convening place, partly because of all the different aspects of the problem that we were interested in.”
Examples of such interactions abound, he says. Social media and e-commerce platforms are another case of “systems built for people, and they have a regulation aspect, and they fit into the same story if you’re trying to understand misinformation or the monitoring of misinformation.”
The book presents many examples of ethical issues in AI, stressing that they must be handled with great care. He cites self-driving cars as an example, where programming decisions in dangerous situations can appear ethical but lead to negative economic and humanitarian outcomes. For instance, while most Americans support the idea that a car should sacrifice its driver rather than kill an innocent person, they wouldn’t buy such a car. This reluctance lowers adoption rates and ultimately increases casualties.
In the book, he explains the difference, as he sees it, between the concept of “transdisciplinary” versus typical cross-disciplinary or interdisciplinary research. “They all have different roles, and they have been successful in different ways,” he says. The key is that most such efforts tend to be transitory, and that can limit their societal impact. The fact is that even if people from different departments work together on projects, they lack a structure of shared journals, conferences, common spaces and infrastructure, and a sense of community. Creating an academic entity in the form of IDSS that explicitly crosses these boundaries in a fixed and lasting way was an attempt to address that lack. “It was primarily about creating a culture for people to think about all these components at the same time.”
He hastens to add that of course such interactions were already happening at MIT, “but we didn’t have one place where all the students are all interacting with all of these principles at the same time.” In the IDSS doctoral program, for instance, there are 12 required core courses — half of them from statistics and optimization theory and computation, and half from the social sciences and humanities.
Dahleh stepped down from the leadership of IDSS two years ago to return to teaching and to continue his research. But as he reflected on the work of that institute and his role in bringing it into being, he realized that unlike his own academic research, in which every step along the way is carefully documented in published papers, “I haven’t left a trail” to document the creation of the institute and the thinking behind it. “Nobody knows what we thought about, how we thought about it, how we built it.” Now, with this book, they do.
The book, he says, is “kind of leading people into how all of this came together, in hindsight. I want to have people read this and sort of understand it from a historical perspective, how something like this happened, and I did my best to make it as understandable and simple as I could.”
How we really judge AI
Suppose you were shown that an artificial intelligence tool offers accurate predictions about some stocks you own. How would you feel about using it? Now, suppose you are applying for a job at a company where the HR department uses an AI system to screen resumes. Would you be comfortable with that?
A new study finds that people are neither entirely enthusiastic nor totally averse to AI. Rather than falling into camps of techno-optimists and Luddites, people are discerning about the practical upshot of using AI, case by case.
“We propose that AI appreciation occurs when AI is perceived as being more capable than humans and personalization is perceived as being unnecessary in a given decision context,” says MIT Professor Jackson Lu, co-author of a newly published paper detailing the study’s results. “AI aversion occurs when either of these conditions is not met, and AI appreciation occurs only when both conditions are satisfied.”
The paper, “AI Aversion or Appreciation? A Capability–Personalization Framework and a Meta-Analytic Review,” appears in Psychological Bulletin. The paper has eight co-authors, including Lu, who is the Career Development Associate Professor of Work and Organization Studies at the MIT Sloan School of Management.
New framework adds insight
People’s reactions to AI have long been subject to extensive debate, often producing seemingly disparate findings. An influential 2015 paper on “algorithm aversion” found that people are less forgiving of AI-generated errors than of human errors, whereas a widely noted 2019 paper on “algorithm appreciation” found that people preferred advice from AI, compared to advice from humans.
To reconcile these mixed findings, Lu and his co-authors conducted a meta-analysis of 163 prior studies that compared people’s preferences for AI versus humans. The researchers tested whether the data supported their proposed “Capability–Personalization Framework” — the idea that in a given context, both the perceived capability of AI and the perceived necessity for personalization shape our preferences for either AI or humans.
Across the 163 studies, the research team analyzed over 82,000 reactions to 93 distinct “decision contexts” — for instance, whether or not participants would feel comfortable with AI being used in cancer diagnoses. The analysis confirmed that the Capability–Personalization Framework indeed helps account for people’s preferences.
“The meta-analysis supported our theoretical framework,” Lu says. “Both dimensions are important: Individuals evaluate whether or not AI is more capable than people at a given task, and whether the task calls for personalization. People will prefer AI only if they think the AI is more capable than humans and the task is nonpersonal.”
He adds: “The key idea here is that high perceived capability alone does not guarantee AI appreciation. Personalization matters too.”
For example, people tend to favor AI when it comes to detecting fraud or sorting large datasets — areas where AI’s abilities exceed those of humans in speed and scale, and personalization is not required. But they are more resistant to AI in contexts like therapy, job interviews, or medical diagnoses, where they feel a human is better able to recognize their unique circumstances.
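Read as a decision rule, the framework boils down to a conjunction of the two perceptions. The sketch below is only an illustration of that logic; the example contexts and the true/false judgments assigned to them are invented for the illustration, not values measured in the study.

```python
# Illustrative sketch of the Capability–Personalization Framework as a
# decision rule. The example contexts and true/false judgments are
# hypothetical; the study measured people's perceptions, not fixed labels.

def predicted_attitude(ai_seems_more_capable: bool, needs_personalization: bool) -> str:
    """AI appreciation only when both conditions hold; otherwise aversion."""
    if ai_seems_more_capable and not needs_personalization:
        return "AI appreciation"
    return "AI aversion"

example_contexts = {
    "fraud detection":        (True, False),   # capable and nonpersonal
    "sorting large datasets": (True, False),
    "therapy":                (False, True),   # a human seems better able to personalize
    "medical diagnosis":      (True, True),    # hypothetical: capability alone isn't enough
}

for context, (capable, personal) in example_contexts.items():
    print(f"{context}: {predicted_attitude(capable, personal)}")
```

Either condition failing predicts aversion, which matches Lu's point that high perceived capability alone does not guarantee appreciation.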
“People have a fundamental desire to see themselves as unique and distinct from other people,” Lu says. “AI is often viewed as impersonal and operating in a rote manner. Even if the AI is trained on a wealth of data, people feel AI can’t grasp their personal situations. They want a human recruiter, a human doctor who can see them as distinct from other people.”
Context also matters: From tangibility to unemployment
The study also uncovered other factors that influence individuals’ preferences for AI. For instance, AI appreciation is more pronounced for tangible robots than for intangible algorithms.
Economic context also matters. In countries with lower unemployment, AI appreciation is more pronounced.
“It makes intuitive sense,” Lu says. “If you worry about being replaced by AI, you’re less likely to embrace it.”
Lu is continuing to examine people’s complex and evolving attitudes toward AI. While he does not view the current meta-analysis as the last word on the matter, he hopes the Capability–Personalization Framework offers a valuable lens for understanding how people evaluate AI across different contexts.
“We’re not claiming perceived capability and personalization are the only two dimensions that matter, but according to our meta-analysis, these two dimensions capture much of what shapes people’s preferences for AI versus humans across a wide range of studies,” Lu concludes.
In addition to Lu, the paper’s co-authors are Xin Qin, Chen Chen, Hansen Zhou, Xiaowei Dong, and Limei Cao of Sun Yat-sen University; Xiang Zhou of Shenzhen University; and Dongyuan Wu of Fudan University.
The research was supported, in part, by grants to Qin and Wu from the National Natural Science Foundation of China.
“Each of us holds a piece of the solution”
MIT has an unparalleled history of bringing together interdisciplinary teams to solve pressing problems — think of the development of radar during World War II, or leading the international coalition that cracked the code of the human genome — but the challenge of climate change could demand a scale of collaboration unlike any that’s come before at MIT.
“Solving climate change is not just about new technologies or better models. It’s about forging new partnerships across campus and beyond — between scientists and economists, between architects and data scientists, between policymakers and physicists, between anthropologists and engineers, and more,” MIT Vice President for Energy and Climate Evelyn Wang told an energetic crowd of faculty, students, and staff on May 6. “Each of us holds a piece of the solution — but only together can we see the whole.”
Undeterred by heavy rain, approximately 300 campus community members filled the atrium in the Tina and Hamid Moghadam Building (Building 55) for a spring gathering hosted by Wang and the Climate Project at MIT. The initiative seeks to direct the full strength of MIT to address climate change, which Wang described as one of the defining challenges of this moment in history — and one of its greatest opportunities.
“It calls on us to rethink how we power our world, how we build, how we live — and how we work together,” Wang said. “And there is no better place than MIT to lead this kind of bold, integrated effort. Our culture of curiosity, rigor, and relentless experimentation makes us uniquely suited to cross boundaries — to break down silos and build something new.”
The Climate Project is organized around six missions, thematic areas in which MIT aims to make significant impact, ranging from decarbonizing industry to new policy approaches to designing resilient cities. The faculty leaders of these missions posed challenges to the audience before circulating among the crowd to share their perspectives and to discuss community questions and ideas.
Wang and the Climate Project team were joined by a number of research groups, startups, and MIT offices conducting relevant work today on issues related to energy and climate. For example, the MIT Office of Sustainability showcased efforts to use the MIT campus as a living laboratory; MIT spinouts such as Forma Systems, which is developing high-performance, low-carbon building systems, and Addis Energy, which envisions using the earth as a reactor to produce clean ammonia, presented their technologies; and visitors learned about current projects in MIT labs, including DebunkBot, an artificial intelligence-powered chatbot that can persuade people to shift their attitudes about conspiracies, developed by David Rand, the Erwin H. Schell Professor at the MIT Sloan School of Management.
Benedetto Marelli, an associate professor in the Department of Civil and Environmental Engineering who leads the Wild Cards Mission, said the energy and enthusiasm that filled the room was inspiring — but that the individual conversations were equally valuable.
“I was especially pleased to see so many students come out. I also spoke with other faculty, talked to staff from across the Institute, and met representatives of external companies interested in collaborating with MIT,” Marelli said. “You could see connections being made all around the room, which is exactly what we need as we build momentum for the Climate Project.”
White House looks to freeze more agency funds — and expand executive power
New Jersey offshore wind project bows out
Meet Trump’s energy pitchman
Trump backs off plan linking disaster aid, immigration
Trump team mum on report targeting state climate action
Insurers reap $20B profit despite LA wildfires
Drought, rising prices, dwindling herds hit North African holiday
Brazil’s Amazon forest sees May setback as climate talks near
Israel intercepts Greta Thunberg’s Gaza aid ship
How a portable shelter could help cool India’s outdoor workers
35 Years for Your Freedom Online
Once upon a time we were promised flying cars and jetpacks. Yet we've arrived at a more complicated timeline where rights advocates can find themselves defending our hard-earned freedoms more often than shooting for the moon. In tough times, it's important to remember that your vision for the future can be just as valuable as the work you do now.
Thirty-five years ago, a small group of folks saw the coming digital future and banded together to ensure that technology would empower people, not oppress them—and EFF was born. While the dangers of corporate and state forces grew alongside the internet, EFF and supporters like you faithfully rose to the occasion. Will you help celebrate EFF’s 35th anniversary and donate in support of digital freedom?
Protect Online Privacy & Free Expression
Together we’ve won many fights for encryption, free speech, innovation, and privacy online. Yet it’s plain to see that we must keep advocating for technology users, whether that’s in the courts, before lawmakers, through public education, or through privacy-enhancing tools. EFF members make it possible—you can lend a hand and get some great perks!
Summer Swag Is Here
We love making stuff for EFF’s members each year. It’s our way of saying thanks for supporting the mission for your rights online, and I hope it’s your way of starting a conversation about internet freedom with people in your life.
Celebrate EFF's 35th Anniversary in the digital rights movement with this EFF35 Cityscape member t-shirt by Hugh D’Andrade! EFF has a not-so-secret weapon that keeps us in the fight even when the odds are against us: we never lose sight of our vision for a better future. Choose a roomy Classic Fit Crewneck or a soft Slim Fit V-Neck.
And enjoy Lovelace-Klimtian vibes on EFF’s new Motherboard Hooded Sweatshirt by Shirin Mori. Gold details and orange poppies pop on lush forest green. Don't lose the forest for the trees—keep fighting for a world where tech supports people irl.
Join the Sustaining Donor Challenge (it’s easy)
You'll get a numbered EFF35 Challenge Coin when you become a monthly or annual Sustaining Donor by July 10. It’s that simple.
If you're already a Sustaining Donor—THANKS! You too can get an EFF 35th Anniversary Challenge Coin when you upgrade your donation. Just increase your monthly or annual gift and let us know by emailing upgrade@eff.org. Get started at eff.org/recurring or go to your PayPal account if you used one.
Support internet freedom with a no-fuss automated recurring donation! Over 30% of EFF members have joined as Sustaining Donors to defend digital rights (and get some great swag every year). Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and EFF owes its strength to technology creators and users like you.
With your help, EFF is here to stay.
Recommendations for producing knowledge syntheses to inform climate change assessments
Nature Climate Change, Published online: 10 June 2025; doi:10.1038/s41558-025-02354-6
Climate change assessment reports are increasing in complexity as the knowledge base grows exponentially. In this Perspective, the authors advocate, and provide recommendations, for knowledge synthesis to become more common as a way to better inform such assessments.
NYC lets AI gamble with Child Welfare
The Markup revealed in its reporting last month that New York City’s Administration for Children’s Services (ACS) has been quietly deploying an algorithmic tool to categorize families as “high risk.” Using a grab-bag of factors like neighborhood and mother’s age, this AI tool can put families under intensified scrutiny without proper justification or oversight.
ACS knocking on your door is a nightmare for any parent, with the risk that any mistakes can break up your family and have your children sent to the foster care system. Putting a family under such scrutiny shouldn’t be taken lightly and shouldn’t be a testing ground for automated decision-making by the government.
This “AI” tool, developed internally by ACS’s Office of Research Analytics, scores families for “risk” using 279 variables and subjects those deemed highest-risk to intensified scrutiny. The lack of transparency, accountability, or due process protections demonstrates that ACS has learned nothing from the failures of similar products in the realm of child services.
The algorithm operates in complete secrecy, and the harms from this opaque “AI theater” are not theoretical. The 279 variables are derived solely from cases from 2013 and 2014 in which children were seriously harmed. However, it is unclear how many cases were analyzed, what kind of auditing and testing, if any, was conducted, and whether including data from other years would have altered the scoring.
What we do know is disturbing: Black families in NYC face ACS investigations at seven times the rate of white families and ACS staff has admitted that the agency is more punitive towards Black families, with parents and advocates calling its practices “predatory.” It is likely that the algorithm effectively automates and amplifies this discrimination.
Despite the disturbing lack of transparency and accountability, ACS has used this system to subject families it ranks as “highest risk” to additional scrutiny, including possible home visits, calls to teachers and family, or consultations with outside experts. But those families, their attorneys, and even caseworkers don't know when and why the system flags a case, making it difficult to challenge the circumstances or process that leads to this intensified scrutiny.
This is not the only instance in which the use of AI tools in the child services system has run into problems with systemic bias. Back in 2022, the Associated Press reported that Carnegie Mellon researchers found that from August 2016 to May 2018, Allegheny County in Pennsylvania used an algorithmic tool that flagged 32.5% of Black children for “mandatory” investigation compared to just 20.8% of white children, all while social workers disagreed with the algorithm's risk scores about one-third of the time.
The Allegheny system operates with the same toxic combination of secrecy and bias now plaguing NYC. Families and their attorneys can never know their algorithmic scores, making it impossible to challenge decisions that could destroy their lives. When a judge asked to see a family’s score in court, the county resisted, claiming it didn't want to influence legal proceedings with algorithmic numbers, which suggests that the scores are too unreliable for judicial scrutiny yet acceptable for targeting families.
Elsewhere these biased systems were successfully challenged. The developers of the Allegheny tool had already had their product rejected in New Zealand, where researchers correctly identified that the tool would likely result in more Māori families being tagged for investigation. Meanwhile, California spent $195,273 developing a similar tool before abandoning it in 2019 due in part to concerns about racial equity.
Governmental deployment of automated and algorithmic decision making not only perpetuates social inequalities, but removes mechanisms for accountability when agencies make mistakes. The state should not be using these tools for rights-determining decisions and any other uses must be subject to vigorous scrutiny and independent auditing to ensure the public’s trust in the government’s actions.
Universal nanosensor unlocks the secrets to plant growth
Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group within the Singapore-MIT Alliance for Research and Technology have developed the world’s first near-infrared fluorescent nanosensor capable of real-time, nondestructive, and species-agnostic detection of indole-3-acetic acid (IAA) — the primary bioactive auxin hormone that controls the way plants develop, grow, and respond to stress.
Auxins, particularly IAA, play a central role in regulating key plant processes such as cell division, elongation, root and shoot development, and response to environmental cues like light, heat, and drought. External factors like light affect how auxin moves within the plant, temperature influences how much is produced, and a lack of water can disrupt hormone balance. When plants cannot effectively regulate auxins, they may not grow well, adapt to changing conditions, or produce as much food.
Existing IAA detection methods, such as liquid chromatography, require taking samples from the plant — which harms or removes part of it. Conventional methods also measure the effects of IAA rather than detecting it directly, and cannot be used universally across different plant types. In addition, since IAA is a small molecule that cannot be easily tracked in real time, biosensors containing fluorescent proteins must be inserted into the plant’s genome to measure auxin, making the plant emit a fluorescent signal for live imaging.
SMART’s newly developed nanosensor enables direct, real-time tracking of auxin levels in living plants with high precision. The sensor uses near-infrared imaging to monitor IAA fluctuations non-invasively across tissues like leaves, roots, and cotyledons, and it is capable of bypassing chlorophyll interference to ensure highly reliable readings even in densely pigmented tissues. The technology does not require genetic modification and can be integrated with existing agricultural systems — offering a scalable precision tool to advance both crop optimization and fundamental plant physiology research.
By providing real-time, precise measurements of auxin, the sensor empowers farmers with earlier and more accurate insights into plant health. With these insights and comprehensive data, farmers can make smarter, data-driven decisions on irrigation, nutrient delivery, and pruning, tailored to the plant’s actual needs — ultimately improving crop growth, boosting stress resilience, and increasing yields.
“We need new technologies to address the problems of food insecurity and climate change worldwide. Auxin is a central growth signal within living plants, and this work gives us a way to tap it to give new information to farmers and researchers,” says Michael Strano, co-lead principal investigator at DiSTAP, Carbon P. Dubbs Professor of Chemical Engineering at MIT, and co-corresponding author of the paper. “The applications are many, including early detection of plant stress, allowing for timely interventions to safeguard crops. For urban and indoor farms, where light, water, and nutrients are already tightly controlled, this sensor can be a valuable tool in fine-tuning growth conditions with even greater precision to optimize yield and sustainability.”
The research team documented the nanosensor’s development in a paper titled, “A Near-Infrared Fluorescent Nanosensor for Direct and Real-Time Measurement of Indole-3-Acetic Acid in Plants,” published in the journal ACS Nano. The sensor comprises single-walled carbon nanotubes wrapped in a specially designed polymer, which enables it to detect IAA through changes in near-infrared fluorescence intensity. Successfully tested across multiple species, including Arabidopsis, Nicotiana benthamiana, choy sum, and spinach, the nanosensor can map IAA responses under various environmental conditions such as shade, low light, and heat stress.
“This sensor builds on DiSTAP’s ongoing work in nanotechnology and the CoPhMoRe technique, which has already been used to develop other sensors that can detect important plant compounds such as gibberellins and hydrogen peroxide. By adapting this approach for IAA, we’re adding to our inventory of novel, precise, and nondestructive tools for monitoring plant health. Eventually, these sensors can be multiplexed, or combined, to monitor a spectrum of plant growth markers for more complete insights into plant physiology,” says Duc Thinh Khong, research scientist at DiSTAP and co-first author of the paper.
“This small but mighty nanosensor tackles a long-standing challenge in agriculture: the need for a universal, real-time, and noninvasive tool to monitor plant health across various species. Our collaborative achievement not only empowers researchers and farmers to optimize growth conditions and improve crop yield and resilience, but also advances our scientific understanding of hormone pathways and plant-environment interactions,” says In-Cheol Jang, senior principal investigator at the Temasek Life Sciences Laboratory (TLL), principal investigator at DiSTAP, and co-corresponding author of the paper.
Looking ahead, the research team aims to combine multiple sensing platforms to simultaneously detect IAA and its related metabolites to create a comprehensive hormone signaling profile, offering deeper insights into plant stress responses and enhancing precision agriculture. They are also working on using microneedles for highly localized, tissue-specific sensing, and collaborating with industrial urban farming partners to translate the technology into practical, field-ready solutions.
The research was carried out by SMART, and supported by the National Research Foundation of Singapore under its Campus for Research Excellence And Technological Enterprise program.