Feed aggregator

Open Austin: Reimagining Civic Engagement and Digital Equity in Texas

EFF: Updates - Fri, 08/29/2025 - 7:08pm

The Electronic Frontier Alliance is growing and this year we’ve been honored to welcome Open Austin into the EFA. Open Austin began in 2009 as a meetup that successfully advocated for a city-run open data portal, and relaunched as a 501(c)(3) in 2018 dedicated to reimagining civic engagement and digital equity by building volunteer open source projects for local social organizations.

As Central Texas’ oldest and largest grassroots civic tech organization, Open Austin has provided hands-on training for over 1,500 members in the hard and soft skills needed to build digital society, not just scroll through it. Recently, I got the chance to speak with Liani Lye, Executive Director of Open Austin, about the organization, its work, and what lies ahead:

There are so many exciting things happening with Open Austin. Can you tell us about your Civic Digital Lab and your Data Research Hub?

Open Austin's Civic Digital Lab reimagines civic engagement by training central Texans to build technology for the public good. We build freely, openly, and alongside a local community stakeholder to represent community needs. Our lab currently supports 5 products:

  • Data Research Hub: Answering residents' questions with detailed information about our city
  • Streamlining Austin Public Library’s “book a study room” UX and code
  • Mapping landlords and rental properties to support local tenant rights organizing
  • Promoting public transit by highlighting points of interest along bus routes
  • Creating an interactive exploration of police bodycam data

We’re actively scaling up our Data Research Hub, which started in January 2025 and was inspired by 9b Corp’s Neighborhood Explorer. Through community outreach, we gather residents’ questions about our region and connect the questions with Open Austin’s data analysts. Each answered question adds to a pool of knowledge that equips communities to address local issues. Crucially, the organizing team at EFF, through the EFA, has connected us to local organizations to generate these questions.

Can you discuss your new Civic Data Fellowship cohort and Communities of Civic Practice? 

Launched in 2024, Open Austin’s Civic Data Fellowship trains the next generation of technologically savvy community leaders by pairing aspiring women, people of color, and LGBTQ+ data analysts with mentors to explore Austin’s challenges. The fellowship culminates in data projects and talks to advocates and policymakers, which double as powerful portfolio pieces. While we weren’t able to fully fund Fellow stipends through grants this year, we successfully raised 25% through grassroots efforts, thanks to the generosity of our supporters.

Along with our fellowship and lab, we host monthly Communities of Civic Practice peer-learning circles that build skills for employability and practical civic engagement. Recent sessions include a speaker on service design in healthcare and the co-creation of a data visualization on broadband adoption that was presented to local government staff. Our in-person communities are a great way to learn and build local public interest tech without becoming a full-on Labs contributor.

For those in Austin and Central Texas who want to get involved in person, how can they plug in?

If you can only come to one event for the rest of the year, come to Open Austin’s 2025 Year-End Celebration. Open Austin members plus our freshly graduated Civic Data Fellow cohort will give lightning talks to share how they’ve supported local social advocacy through open source software and open data work. Otherwise, come to a monthly remote volunteer orientation call. There, we'll share how to get involved in our in-person Communities of Civic Practice and our remote Civic Digital Labs (aka, building open source software).

Open Austin welcomes volunteers from all backgrounds, including those with skills in marketing, fundraising, communications, and operations, not just technologists. You can make a difference in various ways. Come to a remote volunteer orientation call to learn more. And, as always, donate. Running multiple open source projects for structured workforce development is expensive, and your contributions help sustain Open Austin's work in the community. Please visit our donation page for ways to give; thanks EFF!

Friday Squid Blogging: Catching Humboldt Squid

Schneier on Security - Fri, 08/29/2025 - 5:04pm

First-person account of someone accidentally catching several Humboldt squid on a fishing line. No photos, though.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Baggage Tag Scam

Schneier on Security - Fri, 08/29/2025 - 7:01am

I just heard about this:

There’s a travel scam warning going around the internet right now: You should keep your baggage tags on your bags until you get home, then shred them, because scammers are using luggage tags to file fraudulent claims for missing baggage with the airline.

First, the scam is possible. I had a bag destroyed by baggage handlers on a recent flight, and all the information I needed to file a claim was on my luggage tag. I have no idea if I will successfully get any money from the airline, or what form it will be in, or how it will be tied to my name, but at least the first step is possible...

Join Your Fellow Digital Rights Supporters for the EFF Awards on September 10!

EFF: Updates - Thu, 08/28/2025 - 6:57pm

For over 35 years, the Electronic Frontier Foundation has presented awards recognizing key leaders and organizations advancing innovation and championing digital rights. The EFF Awards celebrate the accomplishments of people working toward a better future for technology users, both in the public eye and behind the scenes.

EFF is pleased to welcome all members of the digital rights community, supporters, and friends to this annual award ceremony. Join us to celebrate this year's honorees with drinks, bytes, and excellent company.

 

EFF Award Ceremony
Wednesday, September 10th, 2025
6:00 PM to 10:00 PM Pacific
San Francisco Design Center Galleria
101 Henry Adams Street, San Francisco, CA

Register Now

General Admission: $55 | Current EFF Members: $45 | Students: $35

The celebration will include a strolling dinner and desserts, as well as a hosted bar with cocktails, mocktails, wine, beer, and non-alcoholic beverages! Vegan, vegetarian, and gluten-free food options will be available. We hope to see you in person, wearing either a signature EFF hoodie, or something formal if you're excited for the opportunity to dress up!

If you're not able to make it, we'll also be hosting a livestream of the event on Friday, September 12 at 12:00 PM PT. The event will also be recorded, and posted to YouTube and the Internet Archive after the livestream.

We are proud to present awards to this year's winners:

JUST FUTURES LAW

EFF Award for Leading Immigration and Surveillance Litigation

ERIE MEYER

EFF Award for Protecting Americans' Data

SOFTWARE FREEDOM LAW CENTER, INDIA

EFF Award for Defending Digital Freedoms

More About the 2025 EFF Award Winners

Just Futures Law

Just Futures Law is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. It uses litigation to fight back as part of defending and building the power of immigrant rights and criminal justice activists, organizers, and community groups to prevent criminalization, detention, and deportation of immigrants and people of color. Just Futures was founded in 2019 using a movement lawyering and racial justice framework, and it seeks to transform how litigation and legal support serve communities and build movement power.

In the past year, Just Futures sued the Department of Homeland Security and its subagencies seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and sued the Trump Administration for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti’s current conditions of extraordinary crisis. It has represented activists in their fight against tech giants like Clearview AI; worked with Mijente to launch the TakeBackTech fellowship to train new advocates in grassroots-directed research; and worked with Grassroots Leadership to fight for the release of detained individuals under Operation Lone Star.

Erie Meyer

Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. She is former Chief Technologist at both the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission. Earlier, she was senior advisor to the U.S. Chief Technology Officer at the White House, where she co-founded the United States Digital Service, a team of technologists and designers working to improve digital services for the public. Meyer also worked as senior director at Code for America, a nonprofit that promotes civic hacking to modernize government services, and in the Ohio Attorney General's office at the height of the financial crisis. 

 

Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.” 

Software Freedom Law Center, India

Software Freedom Law Center, India (SFLC.IN) is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about the use of technology.

Founded in 2010 by technology lawyer and online civil liberties activist Mishi Choudhary, SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. It also tracks internet shutdowns and censorship incidents across India, provides digital security training, and has launched the Digital Defenders Network, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platforms that has been used to harass women in India.

Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.

Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit eff.org/thanks or contact tierney@eff.org for more information on corporate giving and sponsorships.

EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full Event Expectations.

Questions? Email us at events@eff.org.

 

Understanding shocks to welfare systems

MIT Latest News - Thu, 08/28/2025 - 4:00pm

In an unhappy coincidence, the Covid-19 pandemic and Angie Jo’s doctoral studies in political science both began in 2019. Paradoxically, this global catastrophe helped define her primary research thrust.

As countries reacted with unprecedented fiscal measures to protect their citizens from economic collapse, Jo MCP ’19 discerned striking patterns among these interventions: Nations typically seen as the least generous on social welfare were suddenly deploying the most dramatic emergency responses.

“I wanted to understand why countries like the U.S., which famously offer minimal state support, suddenly mobilize an enormous emergency response to a crisis — only to let it vanish after the crisis passes,” says Jo.

Driven by this interest, Jo launched into a comparative exploration of welfare states that forms the backbone of her doctoral research. Her work examines how different types of welfare regimes respond to collective crises, and whether these responses lead to lasting institutional reforms or merely temporary patches.

A mismatch in investments

Jo’s research focuses on a particular subset of advanced industrialized democracies — countries like the United States, United Kingdom, Canada, and Australia — that political economists classify as “liberal welfare regimes.” These nations stand in contrast to the “social democratic welfare regimes” exemplified by Scandinavian countries.

“In everyday times, citizens in countries like Denmark or Sweden are already well-protected by a deep and comprehensive welfare state,” Jo explains. “When something like Covid hits, these countries were largely able to use the social policy tools and administrative infrastructure they already had, such as subsidized childcare and short-time work schemes that prevent mass layoffs.”

Liberal welfare regimes, however, exhibit a different pattern. During normal periods, "government assistance is viewed by many as the last resort,” Jo observes. “It’s means-tested and minimal, and the responsibility to manage risk is put on the individual.”

Yet when Covid struck, these same governments “spent historically unprecedented amounts on emergency aid to citizens, including stimulus checks, expanded unemployment insurance, child tax credits, grants, and debt forbearance that might normally have faced backlash from many Americans as government ‘handouts.’”

This stark contrast — minimal investment in social safety nets during normal times followed by massive crisis spending — lies at the heart of Jo’s inquiry. “What struck me was the mismatch: The U.S. invests so little in social welfare at baseline, but when crisis hits, it can suddenly unleash massive aid — just not in ways that stick. So what happens when the next crisis comes?”

From architecture to political economy

Jo took a winding path to studying welfare states in crisis. Born in South Korea, she moved with her family to California at age 3 as her parents sought an American education for their children. After moving back to Korea for high school, she attended Harvard University, where she initially focused on art and architecture.

“I thought I’d be an artist,” Jo recalls, “but I always had many interests, and I was very aware of different countries and different political systems, because we were moving around a lot.”

While studying architecture at Harvard, Jo’s academic focus pivoted.

“I realized that most of the decisions around how things get built, whether it’s a building or a city or infrastructure, are made by the government or by powerful private actors,” she explains. “The architect is the artist’s hand that is commissioned to execute, but the decisions behind it, I realized, were what interested me more.”

After a year working in macroeconomics research at a hedge fund, Jo found herself drawn to questions in political economy. “While I didn’t find the zero-sum game of finance compelling, I really wanted to understand the interactions between markets and governments that lay behind the trades,” she says.

Jo decided to pursue a master’s degree in city planning at MIT, where she studied the political economy of master-planning new cities as a form of industrial policy in China and South Korea, before transitioning to the political science PhD program. Her research focus shifted dramatically when the Covid-19 pandemic struck.

“It was the first time I realized, wow, these wealthy Western democracies have serious problems, too,” Jo says. “They are not dealing well with this pandemic and the structural inequalities and the deep tensions that have always been part of some of these societies, but are being tested even further by the enormity of this shock.”

The costs of crisis response

One of Jo’s key insights challenges conventional wisdom about fiscal conservatism. The assumption that keeping government small saves money in the long run may be fundamentally flawed when considering crisis response.

“What I’m exploring in my research is the irony that the less you invest in a capable, effective and well-resourced government, the more that backfires when a crisis inevitably hits and you have to patch up the holes,” Jo argues. “You’re not saving money; you’re deferring the cost.”

This inefficiency becomes particularly apparent when examining how different countries deployed aid during Covid. Countries like Denmark, with robust data systems connecting health records, employment information, and family data, could target assistance with precision. The United States, by contrast, relied on blunter instruments.

“If your system isn’t built to deliver aid in normal times, it won’t suddenly work well under pressure,” Jo explains. “The U.S. had to invent entire programs from scratch overnight — and many were clumsy, inefficient, or regressive.”

There is also a political aspect to this constraint. “Not only do liberal welfare countries lack the infrastructure to address crises, they are often governed by powerful constituencies that do not want to build it — they deliberately choose to enact temporary benefits that are precisely designed to fade,” Jo argues. “This perpetuates a cycle where short-term compensations are employed from crisis to crisis, constraining the permanent expansion of the welfare state.”

Missed opportunities

Jo’s dissertation also examines whether crises provide opportunities for institutional reform. Her second paper focuses on the 2008 financial crisis in the United States, and the Hardest Hit Fund, a program that allocated federal money to state housing finance agencies to prevent foreclosures.

“I ask why, with hundreds of millions in federal aid and few strings attached, state agencies ultimately helped so few underwater homeowners shed unmanageable debt burdens,” Jo says. “The money and the mandate were there — the transformative capacity wasn’t.”

Some states used the funds to pursue ambitious policy interventions, such as restructuring mortgage debt to permanently reduce homeowners’ principal and interest burdens. However, most opted for temporary solutions like helping borrowers make up missed payments, while preserving their original contract. Partisan politics, financial interests, and status quo bias are most likely responsible for these varying state strategies, Jo believes.

She sees this as “another case of the choice that governments have between throwing money at the problem as a temporary Band-Aid solution, or using a crisis as an opportunity to pursue more ambitious, deeper reforms that help people more sustainably in the long run.”

The significance of crisis response research

For Jo, understanding how welfare states respond to crises is not just an academic exercise, but a matter of profound human consequence.

“When there’s an event like the financial crisis or Covid, the scale of suffering and the welfare gap that emerges is devastating,” Jo emphasizes. “I believe political science should be actively studying these rare episodes, rather than disregarding them as once-in-a-century anomalies.”

Her research carries implications for how we think about welfare state design and crisis preparedness. As Jo notes, the most vulnerable members of society — “people who are unbanked, undocumented, people who have low or no tax liability because they don’t make enough, immigrants or those who don’t speak English or don’t have access to the internet or are unhoused” — are often invisible to relief systems.

As Jo prepares for her career in academia, she is motivated to apply her political science training to address such failures. “We’re going to have more crises, whether pandemics, AI, climate disasters, or financial shocks,” Jo warns. “Finding better ways to cover those people is essential, and is not something that our current welfare state — or our politics — are designed to handle.”

MIT researchers develop AI tool to improve flu vaccine strain selection

MIT Latest News - Thu, 08/28/2025 - 11:50am

Every year, global health experts are faced with a high-stakes decision: Which influenza strains should go into the next seasonal vaccine? The choice must be made months in advance, long before flu season even begins, and it can often feel like a race against the clock. If the selected strains match those that circulate, the vaccine will likely be highly effective. But if the prediction is off, protection can drop significantly, leading to (potentially preventable) illness and strain on health care systems.

This challenge became even more familiar to scientists during the Covid-19 pandemic. Think back to the time (and time and time again) when new variants emerged just as vaccines were being rolled out. Influenza behaves like a similarly rowdy cousin, mutating constantly and unpredictably. That makes it hard to stay ahead, and therefore harder to design vaccines that remain protective.

To reduce this uncertainty, scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Abdul Latif Jameel Clinic for Machine Learning in Health set out to make vaccine selection more accurate and less reliant on guesswork. They created an AI system called VaxSeer, designed to predict dominant flu strains and identify the most protective vaccine candidates, months ahead of time. The tool uses deep learning models trained on decades of viral sequences and lab test results to simulate how the flu virus might evolve and how the vaccines will respond.

Traditional evolution models often analyze the effect of single amino acid mutations independently. “VaxSeer adopts a large protein language model to learn the relationship between dominance and the combinatorial effects of mutations,” explains Wenxian Shi, a PhD student in MIT’s Department of Electrical Engineering and Computer Science, researcher at CSAIL, and lead author of a new paper on the work. “Unlike existing protein language models that assume a static distribution of viral variants, we model dynamic dominance shifts, making it better suited for rapidly evolving viruses like influenza.”

An open-access report on the study was published today in Nature Medicine.

The future of flu

VaxSeer has two core prediction engines: one that estimates how likely each viral strain is to spread (dominance), and another that estimates how effectively a vaccine will neutralize that strain (antigenicity). Together, they produce a predicted coverage score: a forward-looking measure of how well a given vaccine is likely to perform against future viruses.

The score ranges from negative infinity to 0: the closer it is to 0, the better the antigenic match between the vaccine strains and the circulating viruses. (You can think of it as the negative of a kind of “distance.”)
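To make that scale concrete, here is a minimal Python sketch of a dominance-weighted log match score that behaves this way. It illustrates the scale only, not VaxSeer's published formula, and the dominance and hi_match numbers are hypothetical stand-ins for the model's two predictions:

    import math

    # Hypothetical predicted dominance of three circulating strains (sums to 1)
    dominance = {"strain_A": 0.6, "strain_B": 0.3, "strain_C": 0.1}

    # Hypothetical antigenic match of one vaccine candidate against each strain,
    # normalized so that 1.0 means a perfect match (e.g., derived from HI titers)
    hi_match = {"strain_A": 0.9, "strain_B": 0.5, "strain_C": 0.2}

    def coverage_score(dominance, hi_match):
        """Dominance-weighted log antigenic match.

        A perfect match everywhere gives log(1) = 0, the best possible score;
        any mismatch pulls the sum below zero, so the scale runs from negative
        infinity up to 0, like the negative of a distance.
        """
        return sum(p * math.log(hi_match[v]) for v, p in dominance.items())

    print(round(coverage_score(dominance, hi_match), 3))  # prints -0.432

A vaccine that matched every strain perfectly would score exactly 0; the worse its match against the most dominant strains, the further the score sinks below zero.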

In a 10-year retrospective study, the researchers evaluated VaxSeer’s recommendations against those made by the World Health Organization (WHO) for two major flu subtypes: A/H3N2 and A/H1N1. For A/H3N2, VaxSeer’s choices outperformed the WHO’s in nine out of 10 seasons, based on retrospective empirical coverage scores, a surrogate metric for vaccine effectiveness calculated from the observed dominance in past seasons and experimental hemagglutination inhibition (HI) test results. The team used this surrogate to evaluate vaccine selections, since true effectiveness figures are available only for vaccines actually given to the population.

For A/H1N1, it outperformed or matched the WHO in six out of 10 seasons. In one notable case, for the 2016 flu season, VaxSeer identified a strain that wasn’t chosen by the WHO until the following year. The model’s predictions also showed strong correlation with real-world vaccine effectiveness estimates, as reported by the CDC, Canada’s Sentinel Practitioner Surveillance Network, and Europe’s I-MOVE program. VaxSeer’s predicted coverage scores aligned closely with public health data on flu-related illnesses and medical visits prevented by vaccination.

So how exactly does VaxSeer make sense of all these data? Intuitively, the model first estimates how rapidly a viral strain spreads over time using a protein language model, and then determines its dominance by accounting for competition among different strains.

Once the model has calculated its insights, they’re plugged into a mathematical framework based on ordinary differential equations to simulate viral spread over time. For antigenicity, the system estimates how well a given vaccine strain will perform in a common lab test called the hemagglutination inhibition assay. This measures how effectively antibodies can inhibit the virus from binding to human red blood cells, a widely used proxy for antigenic match.
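As a toy sketch of that competition step (an illustration for exposition; the paper's actual dynamics are more involved), replicator-style ordinary differential equations let strains with above-average fitness claim a growing share of circulation. The fitness values below are hypothetical stand-ins for what the protein language model would supply:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Hypothetical per-strain fitness estimates, standing in for the
    # protein language model's scores for each strain's mutations
    fitness = np.array([1.0, 1.4, 0.8])

    def replicator(t, x):
        # Each strain's share grows in proportion to how far its fitness
        # sits above the population average, so the shares keep summing to 1
        mean_fitness = fitness @ x
        return x * (fitness - mean_fitness)

    # Three co-circulating strains starting from equal shares
    x0 = np.array([1 / 3, 1 / 3, 1 / 3])
    sol = solve_ivp(replicator, (0.0, 10.0), x0)
    print(sol.y[:, -1])  # dominance concentrates on the fittest strain

Because each strain grows only relative to the population's average fitness, the shares always sum to 1, so the trajectory directly traces each strain's predicted dominance over time.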

Outpacing evolution

“By modeling how viruses evolve and how vaccines interact with them, AI tools like VaxSeer could help health officials make better, faster decisions — and stay one step ahead in the race between infection and immunity,” says Shi. 

VaxSeer currently focuses only on the flu virus’s HA (hemagglutinin) protein, the major antigen of influenza. Future versions could incorporate other proteins like NA (neuraminidase), and factors like immune history, manufacturing constraints, or dosage levels. Applying the system to other viruses would also require large, high-quality datasets that track both viral evolution and immune responses — data that aren’t always publicly available. The team, however, is currently working on methods that can predict viral evolution in low-data regimes by building on relationships between viral families.

“Given the speed of viral evolution, current therapeutic development often lags behind. VaxSeer is our attempt to catch up,” says Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, AI lead of Jameel Clinic, and CSAIL principal investigator. 

“This paper is impressive, but what excites me perhaps even more is the team’s ongoing work on predicting viral evolution in low-data settings,” says Assistant Professor Jon Stokes of the Department of Biochemistry and Biomedical Sciences at McMaster University in Hamilton, Ontario. “The implications go far beyond influenza. Imagine being able to anticipate how antibiotic-resistant bacteria or drug-resistant cancers might evolve, both of which can adapt rapidly. This kind of predictive modeling opens up a powerful new way of thinking about how diseases change, giving us the opportunity to stay one step ahead and design clinical interventions before escape becomes a major problem.”

Shi and Barzilay wrote the paper with MIT CSAIL postdoc Jeremy Wohlwend ’16, MEng ’17, PhD ’25 and recent CSAIL affiliate Menghua Wu ’19, MEng ’20, PhD ’25. Their work was supported, in part, by the U.S. Defense Threat Reduction Agency and MIT Jameel Clinic.

The UK May Be Dropping Its Backdoor Mandate

Schneier on Security - Thu, 08/28/2025 - 7:00am

The US Director of National Intelligence is reporting that the UK government is dropping its backdoor mandate against the Apple iPhone. For now, at least, assuming that Tulsi Gabbard is reporting this accurately.

New self-assembling material could be the key to recyclable EV batteries

MIT Latest News - Thu, 08/28/2025 - 5:00am

Today’s electric vehicle boom is tomorrow’s mountain of electronic waste. And while myriad efforts are underway to improve battery recycling, many EV batteries still end up in landfills.

A research team from MIT wants to help change that with a new kind of self-assembling battery material that quickly breaks apart when submerged in a simple organic liquid. In a new paper published in Nature Chemistry, the researchers showed the material can work as the electrolyte in a functioning, solid-state battery cell and then revert back to its original molecular components in minutes.

The approach offers an alternative to shredding the battery into a mixed, hard-to-recycle mass. Instead, because the electrolyte serves as the battery’s connecting layer, when the new material returns to its original molecular form, the entire battery disassembles to accelerate the recycling process.

“So far in the battery industry, we’ve focused on high-performing materials and designs, and only later tried to figure out how to recycle batteries made with complex structures and hard-to-recycle materials,” says the paper’s first author Yukio Cho PhD ’23. “Our approach is to start with easily recyclable materials and figure out how to make them battery-compatible. Designing batteries for recyclability from the beginning is a new approach.”

Joining Cho on the paper are PhD candidate Cole Fincher, Ty Christoff-Tempesta PhD ’22, Kyocera Professor of Ceramics Yet-Ming Chiang, Visiting Associate Professor Julia Ortony, Xiaobing Zuo, and Guillaume Lamour.

Better batteries

There’s a scene in one of the “Harry Potter” films where Professor Dumbledore cleans a dilapidated home with the flick of the wrist and a spell. Cho says that image stuck with him as a kid. (What better way to clean your room?) When he saw a talk by Ortony on engineering molecules so that they could assemble into complex structures and then revert back to their original form, he wondered if it could be used to make battery recycling work like magic.

That would be a paradigm shift for the battery industry. Today, batteries require harsh chemicals, high heat, and complex processing to recycle. There are three main parts of a battery: the positively charged cathode, the negatively charged anode, and the electrolyte that shuttles lithium ions between them. The electrolytes in most lithium-ion batteries are highly flammable and degrade over time into toxic byproducts that require specialized handling.

To simplify the recycling process, the researchers decided to make a more sustainable electrolyte. For that, they turned to a class of molecules that self-assemble in water, called aramid amphiphiles (AAs), whose chemical structures and stability mimic those of Kevlar. The researchers further designed the AAs to contain polyethylene glycol (PEG), which can conduct lithium ions, on one end of each molecule. When the molecules are exposed to water, they spontaneously form nanoribbons with ion-conducting PEG surfaces and bases that imitate the robustness of Kevlar through tight hydrogen bonding. The result is a mechanically stable nanoribbon structure that conducts ions across its surface.

“The material is composed of two parts,” Cho explains. “The first part is this flexible chain that gives us a nest, or host, for lithium ions to jump around. The second part is this strong organic material component that is used in the Kevlar, which is a bulletproof material. Those make the whole structure stable.”

When added to water, the molecules self-assemble into millions of nanoribbons that can be hot-pressed into a solid-state material.

“Within five minutes of being added to water, the solution becomes gel-like, indicating there are so many nanofibers formed in the liquid that they start to entangle each other,” Cho says. “What’s exciting is we can make this material at scale because of the self-assembly behavior.”

The team tested the material’s strength and toughness, finding it could endure the stresses associated with making and running the battery. They also constructed a solid-state battery cell that used lithium iron phosphate for the cathode and lithium titanium oxide as the anode, both common materials in today’s batteries. The nanoribbons moved lithium ions successfully between the electrodes, but a side-effect known as polarization limited the movement of lithium ions into the battery’s electrodes during fast bouts of charging and discharging, hampering its performance compared to today’s gold-standard commercial batteries.

“The lithium ions moved along the nanofiber all right, but getting the lithium ion from the nanofibers to the metal oxide seems to be the most sluggish point of the process,” Cho says.

When they immersed the battery cell in organic solvents, the material immediately dissolved, with each part of the battery falling away for easier recycling. Cho compared the material’s reaction to cotton candy being submerged in water.

“The electrolyte holds the two battery electrodes together and provides the lithium-ion pathways,” Cho says. “So, when you want to recycle the battery, the entire electrolyte layer can fall off naturally and you can recycle the electrodes separately.”

Validating a new approach

Cho says the material is a proof of concept that demonstrates the recycle-first approach.

“We don’t want to say we solved all the problems with this material,” Cho says. “Our battery performance was not fantastic because we used only this material as the entire electrolyte for the paper, but what we’re picturing is using this material as one layer in the battery electrolyte. It doesn’t have to be the entire electrolyte to kick off the recycling process.”

Cho also sees a lot of room for optimizing the material’s performance with further experiments.

Now, the researchers are exploring ways to integrate these kinds of materials into existing battery designs as well as implementing the ideas into new battery chemistries.

“It’s very challenging to convince existing vendors to do something very differently,” Cho says. “But with new battery materials that may come out in five or 10 years, it could be easier to integrate this into new designs in the beginning.”

Cho also believes the approach could help reshore lithium supplies by reusing materials from batteries that are already in the U.S.

“People are starting to realize how important this is,” Cho says. “If we can start to recycle lithium-ion batteries from battery waste at scale, it’ll have the same effect as opening lithium mines in the U.S. Also, each battery requires a certain amount of lithium, so extrapolating out the growth of electric vehicles, we need to reuse this material to avoid massive lithium price spikes.”

The work was supported, in part, by the National Science Foundation and the U.S. Department of Energy.

Improving the IPCC–UNFCCC relationship for effective provision of policy-relevant science

Nature Climate Change - Thu, 08/28/2025 - 12:00am

Nature Climate Change, Published online: 28 August 2025; doi:10.1038/s41558-025-02412-z

IPCC assessments are of limited use to the UNFCCC policy process due to misalignment and lack of relevance, with the situation further exacerbated by the UNFCCC’s weak scientific uptake mechanisms. The interface between the IPCC and the UNFCCC urgently needs to be reformed to facilitate a more effective science–policy connection.

Current and future methane emissions from boreal-Arctic wetlands and lakes

Nature Climate Change - Thu, 08/28/2025 - 12:00am

Nature Climate Change, Published online: 28 August 2025; doi:10.1038/s41558-025-02413-y

How much methane will be emitted from the boreal-Arctic region under climate change is not well constrained. Here the authors show that accounting for distinct wetland and lake classes leads to lower estimates of current methane loss as some classes emit low amounts of methane.

Why countries trade with each other while fighting

MIT Latest News - Thu, 08/28/2025 - 12:00am

In World War II, Britain was fighting for its survival against German aerial bombardment. Yet Britain was importing dyes from Germany at the same time. This sounds curious, to put it mildly. How can two countries at war with each other also be trading goods?

Examples of this abound, actually. Britain also traded with its enemies for almost all of World War I. India and Pakistan conducted trade with each other during the First Kashmir War, from 1947 to 1949, and during the India-Pakistan War of 1965. Croatia and then-Yugoslavia traded with each other while fighting in 1992.

“States do in fact trade with their enemies during wars,” says MIT political scientist Mariya Grinberg. “There is a lot of variation in which products get traded, and in which wars, and there are differences in how long trade lasts into a war. But it does happen.”

Indeed, as Grinberg has found, state leaders tend to calculate whether trade can give them an advantage by boosting their own economies while not supplying their enemies with anything too useful in the near term.

“At its heart, wartime trade is all about the tradeoff between military benefits and economic costs,” Grinberg says. “Severing trade denies the enemy access to your products that could increase their military capabilities, but it also incurs a cost to you because you’re losing trade and neutral states could take over your long-term market share.” Therefore, many countries try trading with their wartime foes.

Grinberg explores this topic in a groundbreaking new book, the first one on the subject, “Trade in War: Economic Cooperation Across Enemy Lines,” published this month by Cornell University Press. It is also the first book by Grinberg, an assistant professor of political science at MIT.

Calculating time and utility

“Trade in War” has its roots in research Grinberg started as a doctoral student at the University of Chicago, where she noticed that wartime trade was a phenomenon not yet incorporated into theories of state behavior.

Grinberg wanted to learn about it comprehensively, so, as she quips, “I did what academics usually do: I went to the work of historians and said, ‘Historians, what have you got for me?’”

Modern wartime trading began during the Crimean War, which pitted Russia against France, Britain, the Ottoman Empire, and other allies. Before the war’s start in 1854, France had paid for many Russian goods that could not be shipped because ice in the Baltic Sea was late to thaw. To rescue its produce, France then persuaded Britain and Russia to adopt “neutral rights,” codified in the 1856 Declaration of Paris, which formalized the idea that goods in wartime could be shipped via neutral parties (sometimes acting as intermediaries for warring countries).

“This mental image that everyone has, that we don’t trade with our enemies during war, is actually an artifact of the world without any neutral rights,” Grinberg says. “Once we develop neutral rights, all bets are off, and now we have wartime trade.”

Overall, Grinberg’s systematic analysis of wartime trade shows that it needs to be understood on the level of particular goods. During wartime, states calculate how much it would hurt their own economies to stop trade of certain items; how useful specific products would be to enemies during war, and in what time frame; and how long a war is going to last.

“There are two conditions under which we can see wartime trade,” Grinberg says. “Trade is permitted when it does not help the enemy win the war, and it’s permitted when ending it would damage the state’s long-term economic security, beyond the current war.”

Therefore a state might export diamonds, knowing an adversary would need to resell such products over time to finance any military activities. Conversely, states will not trade products that can quickly convert into military use.

“The tradeoff is not the same for all products,” Grinberg says. “All products can be converted into something of military utility, but they vary in how long that takes. If I’m expecting to fight a short war, things that take a long time for my opponent to convert into military capabilities won’t help them win the current war, so they’re safer to trade.” Moreover, she adds, “States tend to prioritize maintaining their long-term economic stability, as long as the stakes don’t hit too close to home.”

This calculus helps explain some seemingly inexplicable wartime trade decisions. In 1917, three years into World War I, Germany started trading dyes to Britain. As it happens, dyes have military uses, for example as coatings for equipment. And World War I, infamously, was lasting far beyond initial expectations. But as of 1917, German planners thought the introduction of unrestricted submarine warfare would bring the war to a halt in their favor within a few months, so they approved the dye exports. That calculation was wrong, but it fits the framework Grinberg has developed.

States: Usually wrong about the length of wars

“Trade in War” has received praise from other scholars in the field. Michael Mastanduno of Dartmouth College has said the book “is a masterful contribution to our understanding of how states manage trade-offs across economics and security in foreign policy.”

For her part, Grinberg notes that her work holds multiple implications for international relations — one being that trade relationships do not prevent hostilities from unfolding, as some have theorized.

“We can’t expect even strong trade relations to deter a conflict,” Grinberg says. “On the other hand, when we learn our assumptions about the world are not necessarily correct, we can try to find different levers to deter war.”

Grinberg has also observed that states are not good, by any measure, at projecting how long they will be at war.

“States very infrequently get forecasts about the length of war right,” Grinberg says. That fact has formed the basis of a second, ongoing Grinberg book project.

“Now I’m studying why states go to war unprepared, why they think their wars are going to end quickly,” Grinberg says. “If people just read history, they will learn almost all of human history works against this assumption.”

At the same time, Grinberg thinks there is much more that scholars could learn specifically about trade and economic relations among warring countries — and hopes her book will spur additional work on the subject.

“I’m almost certain that I’ve only just begun to scratch the surface with this book,” she says. 

Locally produced proteins help mitochondria function

MIT Latest News - Wed, 08/27/2025 - 4:45pm

Our cells produce a variety of proteins, each with a specific role that, in many cases, means that they need to be in a particular part of the cell where that role is needed. One of the ways that cells ensure certain proteins end up in the right location at the right time is through localized translation, a process that ensures that proteins are made — or translated — close to where they will be needed. MIT professor of biology and Whitehead Institute for Biomedical Research member Jonathan Weissman and colleagues have studied localized translation in order to understand how it affects cell functions and allows cells to quickly respond to changing conditions.

Now, Weissman, who is also a Howard Hughes Medical Institute Investigator, and postdoc in his lab Jingchuan Luo have expanded our knowledge of localized translation at mitochondria, structures that generate energy for the cell. In an open-access paper published today in Cell, they share a new tool, LOCL-TL, for studying localized translation in close detail, and describe the discoveries it enabled about two classes of proteins that are locally translated at mitochondria.

The importance of localized translation at mitochondria relates to their unusual origin. Mitochondria were once bacteria that lived within our ancestors’ cells. Over time, the bacteria lost their autonomy and became part of the larger cells, which included migrating most of their genes into the larger cell’s genome in the nucleus. Cells evolved processes to ensure that proteins needed by mitochondria that are encoded in genes in the larger cell’s genome get transported to the mitochondria. Mitochondria retain a few genes in their own genome, so production of proteins from the mitochondrial genome and that of the larger cell’s genome must be coordinated to avoid mismatched production of mitochondrial parts. Localized translation may help cells to manage the interplay between mitochondrial and nuclear protein production — among other purposes.

How to detect local protein production

For a protein to be made, genetic code stored in DNA is read into RNA, and then the RNA is read or translated by a ribosome, a cellular machine that builds a protein according to the RNA code. Weissman’s lab previously developed a method to study localized translation by tagging ribosomes near a structure of interest, and then capturing the tagged ribosomes in action and observing the proteins they are making. This approach, called proximity-specific ribosome profiling, allows researchers to see what proteins are being made where in the cell. The challenge that Luo faced was how to tweak this method to capture only ribosomes at work near mitochondria.

Ribosomes work quickly, so a ribosome that gets tagged while making a protein at the mitochondria can move on to making other proteins elsewhere in the cell in a matter of minutes. The only way researchers can guarantee that the ribosomes they capture are still working on proteins made near the mitochondria is if the experiment happens very quickly.

Weissman and colleagues had previously solved this time sensitivity problem in yeast cells with a ribosome-tagging tool called BirA that is activated by the presence of the molecule biotin. BirA is fused to the cellular structure of interest, and tags ribosomes it can touch — but only once activated. Researchers keep the cell depleted of biotin until they are ready to capture the ribosomes, to limit the time when tagging occurs. However, this approach does not work with mitochondria in mammalian cells because they need biotin to function normally, so it cannot be depleted.

Luo and Weissman adapted the existing tool to respond to blue light instead of biotin. The new tool, LOV-BirA, is fused to the mitochondrion’s outer membrane. Cells are kept in the dark until the researchers are ready. Then they expose the cells to blue light, activating LOV-BirA to tag ribosomes. They give it a few minutes and then quickly extract the ribosomes. This approach proved very accurate at capturing only ribosomes working at mitochondria.

The researchers then used a method originally developed by the Weissman lab to extract the sections of RNA inside of the ribosomes. This allows them to see exactly how far along in the process of making a protein the ribosome is when captured, which can reveal whether the entire protein is made at the mitochondria, or whether it is partly produced elsewhere and only gets completed at the mitochondria.

“One advantage of our tool is the granularity it provides,” Luo says. “Being able to see what section of the protein is locally translated helps us understand more about how localized translation is regulated, which can then allow us to understand its dysregulation in disease and to control localized translation in future studies.”

Two protein groups are made at mitochondria

Using these approaches, the researchers found that about 20 percent of the genes needed in mitochondria that are located in the main cellular genome are locally translated at mitochondria. These proteins can be divided into two distinct groups with different evolutionary histories and mechanisms for localized translation.

One group consists of relatively long proteins, each containing more than 400 amino acids, or protein building blocks. These proteins tend to be of bacterial origin — present in the ancestor of mitochondria — and they are locally translated in both mammalian and yeast cells, suggesting that their localized translation has been maintained through a long evolutionary history.

Like many mitochondrial proteins encoded in the nucleus, these proteins contain a mitochondrial targeting sequence (MTS), a ZIP code that tells the cell where to bring them. The researchers discovered that most proteins containing an MTS also contain a nearby inhibitory sequence that prevents transportation until they are done being made. This group of locally translated proteins lacks the inhibitory sequence, so they are brought to the mitochondria during their production.

Production of these longer proteins begins anywhere in the cell, and then after approximately the first 250 amino acids are made, they get transported to the mitochondria. While the rest of the protein gets made, it is simultaneously fed into a channel that brings it inside the mitochondrion. This ties up the channel for a long time, limiting import of other proteins, so cells can only afford to do this simultaneous production and import for select proteins. The researchers hypothesize that these bacterial-origin proteins are given priority as an ancient mechanism to ensure that they are accurately produced and placed within mitochondria.

The second locally translated group consists of short proteins, each less than 200 amino acids long. These proteins are more recently evolved, and correspondingly, the researchers found that the mechanism for their localized translation is not shared by yeast. Their mitochondrial recruitment happens at the RNA level. Two sequences within regulatory sections of each RNA molecule that do not encode the final protein instead code for the cell’s machinery to recruit the RNAs to the mitochondria.

The researchers searched for molecules that might be involved in this recruitment, and identified the RNA binding protein AKAP1, which exists at mitochondria. When they eliminated AKAP1, the short proteins were translated indiscriminately around the cell. This provided an opportunity to learn more about the effects of localized translation, by seeing what happens in its absence. When the short proteins were not locally translated, this led to the loss of various mitochondrial proteins, including those involved in oxidative phosphorylation, our cells’ main energy generation pathway.

In future research, Weissman and Luo will delve deeper into how localized translation affects mitochondrial function and dysfunction in disease. The researchers also intend to use LOCL-TL to study localized translation in other cellular processes, including in relation to embryonic development, neural plasticity, and disease.

“This approach should be broadly applicable to different cellular structures and cell types, providing many opportunities to understand how localized translation contributes to biological processes,” Weissman says. “We’re particularly interested in what we can learn about the roles it may play in diseases including neurodegeneration, cardiovascular diseases, and cancers.”

SHASS announces appointments of new program and section heads for 2025-26

MIT Latest News - Wed, 08/27/2025 - 4:30pm

The MIT School of Humanities, Arts, and Social Sciences announced leadership changes in three of its academic units for the 2025-26 academic year.

“We have an excellent cohort of leaders coming in,” says Agustín Rayo, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences. “I very much look forward to working with them and welcoming them into the school's leadership team.”

Sandy Alexandre will serve as head of MIT Literature. Alexandre is an associate professor of literature and served as co-head of the section in 2024-25. Her research spans Black American literature and culture from the late 19th century to the present day. Her first book, “The Properties of Violence: Claims to Ownership in Representations of Lynching,” uses the history of American lynching violence as a framework to understand matters concerning displacement, property ownership, and the American pastoral ideology in a literary context. Her work thoughtfully explores how literature envisions ecologies of people, places, and objects as recurring echoes of racial violence, resonating across the long arc of U.S. history. She earned a bachelor’s degree in English language and literature from Dartmouth College and a master’s and PhD in English from the University of Virginia.

Manduhai Buyandelger will serve as director of the Program in Women’s and Gender Studies. A professor of anthropology, Buyandelger’s research seeks to find solutions for achieving more-integrated (and less-violent) lives for humans and non-humans by examining the politics of multi-species care and exploitation, urbanization, and how diverse material and spiritual realities interact and shape the experiences of different beings. By examining urban multi-species coexistence in different places in Mongolia, the United States, Japan, and elsewhere, her study probes possibilities for co-cultivating an integrated multi-species existence. She is also developing an anthro-engineering project with the MIT Department of Nuclear Science and Engineering (NSE) to explore pathways to decarbonization in Mongolia by examining user-centric design and responding to political and cultural constraints on clean-energy issues. She offers a transdisciplinary course with NSE, 21A.S01 (Anthro-Engineering: Decarbonization at the Million Person Scale), in collaboration with her colleagues in Mongolia’s capital, Ulaanbaatar. She has written two books on religion, gender, and politics in post-socialist Mongolia: “Tragic Spirits: Shamanism, Gender, and Memory in Contemporary Mongolia” (University of Chicago Press, 2013) and “A Thousand Steps to the Parliament: Constructing Electable Women in Mongolia” (University of Chicago Press, 2022). Her essays have appeared in American Ethnologist, Journal of Royal Anthropological Association, Inner Asia, and Annual Review of Anthropology. She earned a BA in literature and linguistics and an MA in philology from the National University of Mongolia, and a PhD in social anthropology from Harvard University.

Eden Medina PhD ’05 will serve as head of the Program in Science, Technology, and Society. A professor of science, technology, and society, Medina studies the relationship of science, technology, and processes of political change in Latin America. She is the author of “Cybernetic Revolutionaries: Technology and Politics in Allende's Chile” (MIT Press, 2011), which won the 2012 Edelstein Prize for best book on the history of technology and the 2012 Computer History Museum Prize for best book on the history of computing. Her co-edited volume “Beyond Imported Magic: Essays on Science, Technology, and Society in Latin America” (MIT Press, 2014) received the Amsterdamska Award from the European Society for the Study of Science and Technology (2016). In addition to her writings, Medina co-curated the exhibition “How to Design a Revolution: The Chilean Road to Design,” which opened in 2023 at the Centro Cultural La Moneda in Santiago, Chile, and is currently on display at the design museum Disseny Hub in Barcelona, Spain. She holds a PhD in the history and social study of science and technology from MIT and a master’s degree in studies of law from Yale Law School. She worked as an electrical engineer prior to starting her graduate studies.

Fikile Brushett named director of MIT chemical engineering practice school

MIT Latest News - Wed, 08/27/2025 - 4:15pm

Fikile R. Brushett, a Ralph Landau Professor of Chemical Engineering Practice, was named director of MIT’s David H. Koch School of Chemical Engineering Practice, effective July 1. In this role, Brushett will lead one of MIT’s most innovative and distinctive educational programs.

Brushett joined the chemical engineering faculty in 2012 and has been a deeply engaged member of the department. An internationally recognized leader in the field of energy storage, his research advances the science and engineering of electrochemical technologies for a sustainable energy economy. He is particularly interested in the fundamental processes that define the performance, cost, and lifetime of present-day and next-generation electrochemical systems. In addition to his research, Brushett has served as a first-year undergraduate advisor, as a member of the department’s graduate admissions committee, and on MIT’s Committee on the Undergraduate Program.

“Fik’s scholarly excellence and broad service position him perfectly to take on this new challenge,” says Kristala L. J. Prather, the Arthur D. Little Professor and head of the Department of Chemical Engineering (ChemE). “His role as practice school director reflects not only his technical expertise, but his deep commitment to preparing students for meaningful, impactful careers. I’m confident he will lead the practice school with the same spirit of excellence and innovation that has defined the program for generations.”

Brushett succeeds T. Alan Hatton, a Ralph Landau Professor of Chemical Engineering Practice Post-Tenure, who directed the practice school for 36 years. For many, Hatton’s name is synonymous with the program. When he became director in 1989, only a handful of major chemical companies hosted stations.

“I realized that focusing on one industry segment was not sustainable and did not reflect the breadth of a chemical engineering education,” Hatton recalls. “So I worked to modernize the experience for students and have it reflect the many ways chemical engineers practice in the modern world.”

Under Hatton’s leadership, the practice school expanded globally and across industries, providing students with opportunities to work on diverse technologies in a wide range of locations. He pioneered the model of recruiting new companies each year, allowing many more firms to participate while also spreading costs across a broader sponsor base. He also introduced an intensive, hands-on project management course at MIT during Independent Activities Period, which has become a valuable complement to students’ station work and future careers.

Value for students and industry

The practice school benefits not only students, but also the companies that host them. By embedding teams directly into manufacturing plants and R&D centers, businesses gain fresh perspectives on critical technical challenges, coupled with the analytical rigor of MIT-trained problem-solvers. Many sponsors report that projects completed by practice school students have yielded measurable cost savings, process improvements, and even new opportunities for product innovation.

For manufacturing industries, where efficiency, safety, and sustainability are paramount, the program provides actionable insights that help companies strengthen competitiveness and accelerate growth. The model creates a unique partnership: students gain true real-world training, while companies benefit from MIT expertise and the creativity of the next generation of chemical engineers.

A century of hands-on learning

Founded in 1916 by MIT chemical engineering alumnus Arthur D. Little and Professor William Walker, with funding from George Eastman of Eastman Kodak, the practice school was designed to add a practical dimension to chemical engineering education. The first five sites — all in the Northeast — focused on traditional chemical industries working on dyes, abrasives, solvents, and fuels.

Today, the program remains unique in higher education. Students consult with companies worldwide across fields ranging from food and pharmaceuticals to energy and finance, tackling some of industry’s toughest challenges. More than a hundred years after its founding, the practice school continues to embody MIT’s commitment to hands-on, problem-driven learning that transforms both students and the industries they serve.

The practice school experience is part of ChemE’s MSCEP and PhD/ScDCEP programs. After completing the coursework for their program, students attend practice school stations at host company sites. A group of six to 10 students spends two months at each of two stations; each station experience includes teams of two or three students working on month-long projects and preparing formal talks, a scope of work, and a final report for the host company. Recent stations include Evonik in Marl, Germany; AstraZeneca in Gaithersburg, Maryland; EGA in Dubai, UAE; AspenTech in Bedford, Massachusetts; and Shell Technology Center and Dimensional Energy in Houston, Texas.

New method could monitor corrosion and cracking in a nuclear reactor

MIT Latest News - Wed, 08/27/2025 - 3:30pm

MIT researchers have developed a technique that enables real-time, 3D monitoring of corrosion, cracking, and other material failure processes inside a nuclear reactor environment.

This could allow engineers and scientists to design safer nuclear reactors that also deliver higher performance for applications like electricity generation and naval vessel propulsion.

During their experiments, the researchers used extremely powerful X-rays to mimic the behavior of neutrons interacting with a material inside a nuclear reactor.

They found that adding a buffer layer of silicon dioxide between the material and its substrate, and keeping the material under the X-ray beam for a longer period of time, improves the stability of the sample. This allows for real-time monitoring of material failure processes.

By reconstructing 3D image data on the structure of a material as it fails, researchers could design more resilient materials that can better withstand the stress caused by irradiation inside a nuclear reactor.

“If we can improve materials for a nuclear reactor, it means we can extend the life of that reactor. It also means the materials will take longer to fail, so we can get more use out of a nuclear reactor than we do now. The technique we’ve demonstrated here allows [us] to push the boundary in understanding how materials fail in real time,” says Ericmoore Jossou, who holds shared appointments in the Department of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick Professor, and in the Department of Electrical Engineering and Computer Science (EECS) and the MIT Schwarzman College of Computing.

Jossou, senior author of a study on this technique, is joined on the paper by lead author David Simonne, an NSE postdoc; Riley Hultquist, a graduate student in NSE; Jiangtao Zhao, of the European Synchrotron; and Andrea Resta, of Synchrotron SOLEIL. The research was published Tuesday in the journal Scripta Materialia.

“Only with this technique can we measure strain with a nanoscale resolution during corrosion processes. Our goal is to bring such novel ideas to the nuclear science community while using synchrotrons both as an X-ray probe and radiation source,” adds Simonne.

Real-time imaging

Studying real-time failure of materials used in advanced nuclear reactors has long been a goal of Jossou’s research group.

Usually, researchers can only learn about such material failures after the fact, by removing the material from its environment and imaging it with a high-resolution instrument.

“We are interested in watching the process as it happens. If we can do that, we can follow the material from beginning to end and see when and how it fails. That helps us understand a material much better,” he says.

They simulate the process by firing an extremely focused X-ray beam at a sample to mimic the environment inside a nuclear reactor. The researchers must use a special type of high-intensity X-ray, which is only found in a handful of experimental facilities worldwide.

For these experiments they studied nickel, a material incorporated into alloys that are commonly used in advanced nuclear reactors. But before they could start the X-ray equipment, they had to prepare a sample.

To do this, the researchers used a process called solid state dewetting, which involves putting a thin film of the material onto a substrate and heating it to an extremely high temperature in a furnace until it transforms into single crystals.

“We thought making the samples was going to be a walk in the park, but it wasn’t,” Jossou says.

As the nickel heated up, it interacted with the silicon substrate and formed a new chemical compound, essentially derailing the entire experiment. After much trial-and-error, the researchers found that adding a thin layer of silicon dioxide between the nickel and substrate prevented this reaction.

But when crystals formed on top of the buffer layer, they were highly strained. This means the individual atoms had moved slightly to new positions, causing distortions in the crystal structure.

Phase retrieval algorithms can typically recover the 3D size and shape of a crystal in real time, but if there is too much strain in the material, the algorithms will fail.
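
The article doesn’t name the team’s specific reconstruction code, but iterative phase retrieval in this family alternates between two constraints: the measured diffraction magnitudes in reciprocal space, and a support region in real space where the crystal is allowed to exist. The sketch below shows a minimal error-reduction loop in Python; the array names, random starting phases, and iteration count are illustrative assumptions, not details from the study.

```python
import numpy as np

def error_reduction(measured_magnitude, support, n_iters=200, seed=0):
    """Minimal Fienup-style error-reduction phase retrieval sketch.

    measured_magnitude: |F|, diffraction amplitudes on the detector grid.
    support: boolean mask marking where the object may be nonzero.
    """
    rng = np.random.default_rng(seed)
    # Start from the measured magnitudes with random phases.
    phases = rng.uniform(0.0, 2.0 * np.pi, measured_magnitude.shape)
    G = measured_magnitude * np.exp(1j * phases)
    for _ in range(n_iters):
        g = np.fft.ifftn(G)                # back to real space
        g = np.where(support, g, 0.0)      # enforce the support constraint
        G = np.fft.fftn(g)                 # forward to reciprocal space
        # Keep the computed phases, restore the measured magnitudes.
        G = measured_magnitude * np.exp(1j * np.angle(G))
    return np.fft.ifftn(G)                 # complex-valued object estimate
```

In Bragg-geometry imaging of a crystal, the phase of the recovered complex object encodes lattice displacement, so heavy strain means rapidly varying phase, which is exactly the condition that keeps loops like this from converging.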

However, the team was surprised to find that keeping the X-ray beam trained on the sample for a longer period of time caused the strain to slowly relax, thanks to the silicon dioxide buffer layer. After a few extra minutes of X-rays, the sample was stable enough that they could use phase retrieval algorithms to accurately recover the 3D shape and size of the crystal.

“No one had been able to do that before. Now that we can make this crystal, we can image electrochemical processes like corrosion in real time, watching the crystal fail in 3D under conditions that are very similar to inside a nuclear reactor. This has far-reaching impacts,” he says.

They experimented with different substrates, such as niobium-doped strontium titanate, and found that only a silicon wafer buffered with silicon dioxide created this unique effect.

An unexpected result

As they fine-tuned the experiment, the researchers discovered something else.

They could also use the X-ray beam to precisely control the amount of strain in the material, which could have implications for the development of microelectronics.

In the microelectronics community, engineers often introduce strain to deform a material’s crystal structure in a way that boosts its electrical or optical properties.

“With our technique, engineers can use X-rays to tune the strain in microelectronics while they are manufacturing them. While this was not our goal with these experiments, it is like getting two results for the price of one,” he adds.

In the future, the researchers want to apply this technique to more complex materials like steel and other metal alloys used in nuclear reactors and aerospace applications. They also want to see how changing the thickness of the silicon dioxide buffer layer impacts their ability to control the strain in a crystal sample.

“This discovery is significant for two reasons. First, it provides fundamental insight into how nanoscale materials respond to radiation — a question of growing importance for energy technologies, microelectronics, and quantum materials. Second, it highlights the critical role of the substrate in strain relaxation, showing that the supporting surface can determine whether particles retain or release strain when exposed to focused X-ray beams,” says Edwin Fohtung, an associate professor at the Rensselaer Polytechnic Institute, who was not involved with this work.

This work was funded, in part, by the MIT Faculty Startup Fund and the U.S. Department of Energy. The sample preparation was carried out, in part, at the MIT.nano facilities.

We Are Still Unable to Secure LLMs from Malicious Inputs

Schneier on Security - Wed, 08/27/2025 - 7:07am

Nice indirect prompt injection attack:

Bargury’s attack starts with a poisoned document, which is shared to a potential victim’s Google Drive. (Bargury says a victim could have also uploaded a compromised file to their own account.) It looks like an official document on company meeting policies. But inside the document, Bargury hid a 300-word malicious prompt that contains instructions for ChatGPT. The prompt is written in white text in a size-one font, something that a human is unlikely to see but a machine will still read.

In a proof of concept video of the attack...
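
The excerpt stops short of remedies, and there is still no robust general fix for prompt injection, but one cheap screening layer is to flag near-invisible text before a document ever reaches a model. Here is a minimal sketch, assuming .docx input and the python-docx library; the thresholds and the function name are illustrative, not from the post.

```python
from docx import Document                # pip install python-docx
from docx.enum.dml import MSO_COLOR_TYPE
from docx.shared import RGBColor

WHITE = RGBColor(0xFF, 0xFF, 0xFF)

def flag_hidden_runs(path, max_pt=2.0):
    """Return text runs a human reader would likely miss: tiny fonts
    or white text, the two tricks described in the attack above."""
    suspicious = []
    for para in Document(path).paragraphs:
        for run in para.runs:
            if not run.text.strip():
                continue
            font = run.font
            tiny = font.size is not None and font.size.pt <= max_pt
            white = (font.color is not None
                     and font.color.type == MSO_COLOR_TYPE.RGB
                     and font.color.rgb == WHITE)
            if tiny or white:
                suspicious.append(run.text)
    return suspicious
```

Screening like this raises the bar but doesn’t address the root problem: the model cannot distinguish instructions from data, so injected text in any form it ingests remains effective.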

Podcast Episode: Protecting Privacy in Your Brain

EFF: Updates - Wed, 08/27/2025 - 3:05am

The human brain might be the grandest computer of all, but in this episode, we talk to two experts who confirm that the ability for tech to decipher thoughts, and perhaps even manipulate them, isn't just around the corner – it's already here. Rapidly advancing "neurotechnology" could offer new ways for people with brain trauma or degenerative diseases to communicate, as the New York Times reported this month, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could allow manipulating how people perceive and process reality, as well as their responses to it – a Pandora’s box of epic proportions.

(You can also find this episode on the Internet Archive and on YouTube.) 

Neuroscientist Rafael Yuste and human rights lawyer Jared Genser are awestruck by both the possibilities and the dangers of neurotechnology. Together they established The Neurorights Foundation, and now they join EFF’s Cindy Cohn and Jason Kelley to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they're building to protect the privacy of the mind. 

In this episode you’ll learn about:

  • How to protect people’s mental privacy, agency, and identity while ensuring equal access to the positive aspects of brain augmentation
  • Why neurotechnology regulation needs to be grounded in international human rights
  • Navigating the complex differences between medical and consumer privacy laws
  • The risk that information collected by devices now on the market could be decoded into actual words within just a few years
  • Balancing beneficial innovation with the protection of people’s mental privacy 

Rafael Yuste is a professor of biological sciences and neuroscience, co-director of the Kavli Institute for Brain Science, and director of the NeuroTechnology Center at Columbia University. He led the group of researchers that first proposed the BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative launched in 2013 by the Obama Administration. 

Jared Genser is an international human rights lawyer who serves as managing director at Perseus Strategies, renowned for his successes in freeing political prisoners around the world. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and he is outside general counsel to The Neurorights Foundation, an international advocacy group he co-founded with Yuste that works to enshrine human rights as a crucial part of the development of neurotechnology.  

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

RAFAEL YUSTE: The brain is not just another organ of the body, but the one that generates our mind, all of our mental activity. And that's the heart of what makes us human is our mind. So this technology is one technology that for the first time in history can actually get to the core of what makes us human and not only potentially decipher, but manipulate the essence of our humanity.
10 years ago we had a breakthrough with studying the mouse’s visual cortex in which we were able to not just decode from the brain activity of the mouse what the mouse was looking at, but to manipulate the brain activity of the mouse. To make the mouse see things that it was not looking at.
Essentially we introduce, in the brain of the mouse, images. Like hallucinations. And in doing so, we took control over the perception and behavior of the mouse. So the mouse started to behave as if it was seeing what we were essentially putting into his brain by activating groups of neurons.
So this was fantastic scientifically, but that night I didn't sleep because it hit me like a ton of bricks. Like, wait a minute, what we can do in a mouse today, you can do in a human tomorrow. And this is what I call my Oppenheimer moment, like, oh my God, what have we done here?

CINDY COHN: That's the renowned neuroscientist Rafael Yuste talking about the moment he realized that his groundbreaking brain research could have incredibly serious consequences. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.

CINDY COHN: On this show, we flip the script from the dystopian doom and gloom thinking we all get mired in when thinking about the future of tech. We're here to challenge ourselves, our guests and our listeners to imagine a better future that we can be working towards. How can we make sure to get this right, and what can we look forward to if we do?
And today we have two guests who are at the forefront of brain science – and are thinking very hard about how to protect us from the dangers that might seem like science fiction today, but are becoming more and more likely.

JASON KELLEY: Rafael Yuste is one of the world's most prominent neuroscientists. He's been working in the field of neurotechnology for many years, and was one of the researchers who led the BRAIN Initiative launched by the Obama administration, which was a large-scale research project akin to the Genome Project, but focusing on brain research. He's the director of the NeuroTechnology Center at Columbia University, and his research has enormous implications for a wide range of mental health disorders, including schizophrenia, and neurodegenerative diseases like Parkinson's and ALS.

CINDY COHN: But as Rafael points out in the introduction, there are scary implications for technology that can directly manipulate someone's brain.

JASON KELLEY: We're also joined by his partner, Jared Genser, a legendary human rights lawyer who has represented no fewer than five Nobel Peace Prize laureates. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and together with Rafael, he founded the Neurorights Foundation, an international advocacy group that is working to enshrine human rights as a crucial part of the development of neurotechnology.

CINDY COHN: We started our conversation by asking how the brain scientist and the human rights lawyer first teamed up.

RAFAEL YUSTE: I knew nothing about the law. I knew nothing about human rights my whole life. I said, okay, I avoided that like the pest because you know what? I have better things to do, which is to focus on how the brain works. But I was just dragged into the middle of this by our own work.
So it was a very humbling moment and I said, okay, you know what? I have to cross to the other side and get involved really with the experts that know how this works. And that's how I ended up talking to Jared. The whole reason we got together was pretty funny. We both got the same award from a Swedish foundation, from the Talbert Foundation, this Liaison Award for Global Leadership. In my case, because of the work I did on the Brain Initiative, and Jared, got this award for his human rights work.
And, you know, this is one, good thing of getting an award, or let me put it differently, at least, that getting an award led to something positive in this case is that someone in the award committee said, wait a minute, you guys should be talking to each other. and they put us in touch. He was like a matchmaker.

CINDY COHN: I mean, you really stumbled into something amazing because, you know, Jared, you're, you're not just kind of your random human rights lawyer, right? So tell me your version, Jared, of the meet cute.

JARED GENSER: Yes. I'd say we're like work spouses together. So the feeling is mutual in terms of the admiration, to say the least. And for me, that call was really transformative. It was probably the most impactful one hour call I've had in my career in the last decade because I knew very little to nothing about the neurotechnology side, you know, other than what you might read here or there.
I definitely had no idea how quickly emerging neuro technologies were developing and the sensitivity - the enormous sensitivity - of that data. And in having this discussion with Rafa, it was quite clear to me that my view of the major challenges we might face as humanity in the field of human rights was dramatically more limited than I might have thought.
And, you know, Rafa and I became fast friends after that and very shortly thereafter co-founded the Neurorights Foundation, as you noted earlier. And I think this is what's made us such a strong team: our experiences and our knowledge and expertise are highly complementary.
Um, you know, Rafa and his colleagues at the Morningside Group, which is a group of 25 experts he collected together at, uh, at Columbia, had already, um, you know, met and come up with, and published in the journal Nature, a review of the potential concerns that arise out of the potential misuse and abuse of neurotech.
And there were five areas of concern that they had identified, which include mental privacy, mental agency, mental identity, concerns about discrimination in the development and application of neurotechnologies, and fair use of mental augmentation. And these generalized concerns, uh, which they refer to as neurorights, of course map over to international human rights, uh, that to some extent are already protected by international treaties.
Um, but to other extents might need to be further interpreted from existing international treaties. And it was quite clear that when one would think about emerging neuro technologies and what they might be able to do, that a whole dramatic amount of work needed to be done before these things proliferate in such an extraordinary sense around the world.

JASON KELLEY: So Rafa and Jared, when I read a study like the one you described with the mice, my initial thought is, okay, that's great in a lab setting. I don't initially think like, oh, in five years or 10 years, we'll have technology that actually can be, you know, in the marketplace or used by the government to do the hallucination implanting you're describing. But it sounds like this is a realistic concern, right? You wouldn't be doing this work unless this had progressed very quickly from that experiment to actual applications and concerns. So what has that progression been like? Where are we now?

RAFAEL YUSTE: So let me tell you, two years ago I got a phone call in the middle of the night. It woke me up in the middle of the night, okay, from a colleague and friend who had his Oppenheimer moment. And his name is Eddie Chang. He's a professor of neurosurgery at UCSF, and he's arguably the leader in the world in decoding brain activity from human patients. So he had been working with a patient that was paralyzed because of a bulbar infarction, a stroke in, essentially, the base of her brain, and she had locked-in syndrome, so she couldn't communicate with the exterior. She was in a wheelchair, and they implanted an electrode array into her brain with neurosurgery and connected those electrodes to a computer with an algorithm using generative AI.
And using this algorithm, they were able to decode her inner speech - the language that she wanted to generate. She couldn't speak because she was paralyzed. And when you conjure – we don't really know exactly what goes on during speech – but when you conjure the words in your mind, they were able to actually decode those words.
And then not only that, they were able to decode her emotions and even her facial gestures. So she was paralyzed and Eddie and her team built an avatar of the person in the computer with her face and gave that avatar, her voice, her emotions, and her facial gestures. And if you watch the video, she was just blown away.
So Eddie called me up and explained to me what they've done. I said, well, Eddie, this is absolutely fantastic. You just unlocked the person from this locked-in syndrome, giving hope to all the patients that have a similar problem. But of course he said, no, no, I, I'm not talking about that. I'm talking about, we just cloned her essentially.
It was actually published as the cover of the journal Nature. Again, this is the top journal in the world, so they gave them the cover. It was such an impressive result. And this was implantable neurotechnology, so it requires a neurosurgeon to go in and put in these electrodes. So of course, in a hospital setting, this is all under control and super regulated.
But since then, there's been fast development, partly spurred by all these investments into neurotechnology, private and public, all over the world. There's been a lot of development of non-implantable neurotechnology to either record brain activity from the surface or to stimulate the brain from the surface without having to open up the skull.
And let me just tell you two examples that bring home the fact that this is not science fiction. In December 2023, a team in Australia used an EEG device, essentially like a helmet that you put on – you can actually buy these things on Amazon – and coupled it to a generative AI algorithm, again, like Eddie Chang. In fact, I think they were inspired by Eddie Chang's work, and they were able to decode the inner speech of volunteers. It wasn't as accurate as the decoding that you can do if you stick the electrodes inside. But from the outside, they have a video of a person that is mentally ordering a cappuccino at a Starbucks, no? And they essentially decode – they don't decode absolutely every word that the person is thinking, but enough words that the message comes out loud and clear. So the decoding of inner speech is doable with non-invasive technology. Not only that study from Australia – since then, you know, all these teams in the world, we work as, we help each other continuously. So, uh, shortly after that Australian team, another study in Japan published something, uh, with much higher accuracy, and then another study in China. Anyway, it is now becoming very common practice to use generative AI to decode speech.
And then on the stimulation side there is also something that raises a lot of concerns ethically. In 2022, a lab at Boston University used external magnetic stimulation to activate parts of the brain in a cohort of volunteers who were older in age. This was the control group for a study on Alzheimer's patients. And they reported, in a very good paper, that they could increase both short-term and long-term memory by 30%.
So this is the first serious case that I know of where again, this is not science fiction, this is demonstrated enhancement of, uh, mental ability in a human with noninvasive neurotechnology. So this could open the door to a whole industry that could use noninvasive devices, maybe magnetic simulation, maybe acoustical, maybe, who knows, optical, to enhance any aspect of our mental activity. And that, I mean, just imagine.
This is what we're actually focusing on our foundation right now, this issue of mental augmentation because we don't think it's science fiction. We think it's coming.

JARED GENSER: Let me just kind of amplify what Rafa's saying and try to make this as tangible as possible for your listeners, which is that, as Rafa was already alluding to, when you're talking about, of course, implantable devices, you know, they have to be licensed by the Food and Drug Administration. They're implanted through neurosurgery in the medical context. All the data that's being gathered is covered by, you know, HIPAA and other state health data laws. But there are already 30 different kinds of wearable neurotechnology devices available on the market that you can buy and use today.
As one example, you know, there's the company, Muse, that has a meditation device and you can buy their device. You put it on your head, you meditate for an hour. The BCI - brain computer interface - connects to your app. And then basically you'll get back from the company, you know, decoding of your brain activity to know when you're in a meditative state or not.
The problem is that these are EEG scanning devices that, if they were used in a medical context, would be required to be licensed. But in a consumer context, there's no regulation of any kind. And you're talking about devices that can gather from gigabytes to terabytes of neural data today, of which you can only decode maybe 1%.
And from the data that's being gathered – uh, you know, EEG scanning device data in wearable form – you could identify if a person has any of a number of different brain diseases, and you could also decode about a dozen different mental states. Are you happy, are you sad? And so forth.
And so at our foundation, at the Neurorights Foundation, we actually did a very important study on this topic that was covered on the front page of the New York Times. We looked at the user agreements and the privacy agreements for the 30 different companies’ products that you can buy today, right now. And what we found was that in 29 out of the 30 cases, basically, it's carte blanche for the companies. They can download your data, they can use it as they see fit, and they can transfer it, sell it, etc.
Only in one case did a company, ironically called Unicorn, actually keep the data on your local device; it was never transferred to the company in question. And we benchmarked those agreements against a half dozen different global privacy standards and found that there were just, you know, gigantic gaps.
So, you know, why is that a problem? Well take the Muse device I just mentioned, they talk about how they've downloaded a hundred million hours of consumer neural data from people who have bought their device and used it. And we're talking about these studies in Australia and Japan that are decoding thought to text.
Today, thought-to-text with the EEG can only be done at a relatively slow speed, like 10 or 15 words a minute, with maybe 40, 50% accuracy. But eventually it's gonna start to approach the speed of Eddie Chang's work in California, where with the implantable device you can do thought-to-text at 80 words a minute, 95% accuracy.
And so the problem is that in three, four years, let's say when this technology is perfected with a wearable device, this company Muse could theoretically go back to that hundred million hours of neural data and then actually decode what the person was thinking in the form of words when they were actually meditating.
And to help you understand as a last point, why is this, again, science and not science fiction? You know, Apple is already clearly aware of the potential here, and two years ago, they actually filed a patent application for their next generation AirPod device that is going to have built-in EEG scanners in each ear, right?
And they sell a hundred million pairs of AirPods every single year, right? And when this kind of technology, thought to text, is perfected in wearable form, those AirPods will be able to be used, for example, to do thought-to-text emails, thought-to-text text messages, et cetera.
But when you continue to wear those AirPod devices, the huge question is what's gonna be happening to all the other data that's being, you know, absorbed – how is it going to be able to be used, and so forth. And so this is why it's really urgent at an international level to be dealing with this. And we're working at the United Nations and in many other places to develop various kinds of frameworks consistent with international human rights law. And we're also working, you know, at the national and sub-national level.
Rafa, my colleague, you know, led the charge in Chile to help create a first-ever constitutional amendment protecting mental privacy in Chile. We've been working with a number of states in the United States now – California, Colorado and Montana, very different kinds of states, have all amended their state consumer data privacy laws to extend their application to neural data. But it is really, really urgent in light of the fast-developing technology and the enormous gaps between these consumer product devices and their user agreements and what is considered to be best practice in terms of data privacy protection.

CINDY COHN: Yeah, I mean I saw that study that you did and it's just, you know, it mirrors a lot of what we see in other contexts where we've got clickwrap licenses and other, you know, kind of very flimsy one-sided agreements that people allegedly agree to – but I don't think, under any lawyer's understanding of, like, a meeting of the minds, that it's anything like a contract that you negotiate.
And then when you add it to this context, I think it puts these problems on steroids in many ways and makes 'em really worse. And I think one of the things I've been thinking about in this is, you know, you guys have in some ways, you know, one of the scenarios that demonstrates how our refusal to take privacy seriously on the consumer side and on the law enforcement side is gonna have really, really dire, much more dire consequences for people potentially than we've even seen so far. And really requires serious thinking about, like, what do we mean in terms of protecting people's privacy and identity and self-determination?

JARED GENSER: Let me just interject on that one narrow point because I was literally just on a panel discussion remotely at the UN Crime Congress last week that was hosted by the UN Office on Drugs and Crime, UNODC, and Interpol, the international police organization. And it was a panel discussion on the topic of emerging law enforcement uses of neurotechnologies. And so this is coming. They just launched a project jointly to look at potential uses as well as to develop, um, guidelines for how that can be done. But this is not at all theoretical. I mean, this is very, very practical.

CINDY COHN: And much of the funding in this area has come out of the Department of Defense, so thinking about how we put the right guardrails in place is really important. And honestly, if you think that the only people who are gonna want access to the neural data that these devices are collecting are private companies who wanna sell us things – you know, that's not the history, right? Law enforcement comes for these things, both locally and internationally, no matter who has custody of them. And so you kind of have to recognize that this isn't just a foray for kind of skeezy companies to do things we don't like.

JARED GENSER: Absolutely.

JASON KELLEY: Let's take a quick moment to thank our sponsor. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You're the reason we exist, and EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. Also, we'd love for you to join us at this year's EFF awards where we celebrate the people working towards the better digital future that we all care so much about.
Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.
We also wanted to share that our friend Cory Doctorow has a new podcast you might like. Have a listen to this:
[WHO BROKE THE INTERNET TRAILER]
And now back to our conversation with Rafael Yuste and Jared Genser.

CINDY COHN: This might be a little bit of a geeky lawyer question, but I really appreciated the decision you guys made to really ground this in international human rights, which I think is tremendously important. But not obvious to most Americans as the kind of framework that we ought to invoke. And I was wondering how you guys came to that conclusion.

JARED GENSER: No, I think it's actually a very, very important question. I mean, I think that the bottom line is that there are a lot of ways to look at, um, questions like this. You know, you can think about, you know, a national constitution or national laws. You can think about international treaties or laws.
You can look at ethical frameworks or self governance by companies themselves, right? And at the end of the day, because of the seriousness and the severity of the potential downside risks if this kind of technology is misused or abused, you know, our view is that what we really need is what's referred to by lawyers as hard law, as in law that is binding and enforceable against states by citizens. And obviously binding on governments and what they do, binding on companies and what they do and so forth.
And so it's not that we don't think, for example, ethical frameworks or ethical standards or self-governance by companies are not important. They are very much a part of an overall approach, but our approach at the Neurorights Foundation is, let's look at hard law, and there are two kinds of hard law to look at. The first are international human rights treaties. These are multilateral agreements that states negotiate and come to agreements on. And when a country signs and ratifies a treaty, as the US has on the key relevant treaty here, which is the International Covenant on Civil and Political Rights, those rights get domesticated in the law of each country in the world that signs and ratifies them, and that makes them then enforceable. And so we think first and foremost, it's important that we ground our concerns about the misuse and abuse of these technologies in the requirements of international human rights law.
Because the United States is obligated and other countries in the world are obligated to protect their citizens from abuses of these rights.
And at the same time, of course, that isn't sufficient on its own. We also need to see, in certain contexts, amendments to a constitution – probably not in the US context, where that's much harder to do – but also laws that are actually enforceable against companies.
And this is why our work in California, Montana and Colorado is so important because now companies in California, as one illustration, which is where Apple is based and where Meta is based and so forth, right? They now have to provide the protections embedded in the California Consumer Privacy Act to all of their gathering and use of neural data, right?
And that means that you have a right to be forgotten. You have a right to demand your data not be transferred or sold to third parties. You have a right to have access to your data. Companies have obligations to tell you what data they are gathering, how they're gonna use it, whether they propose selling or transferring it, to whom, and so forth, right?
So these are now ultimately gonna be binding law on companies, you know, based in California and, as we're developing this, around the world. But to us, you know, that is really what needs to happen.

JASON KELLEY: Your success has been pretty stunning. I mean, even though you're, you know, there's obviously so much more to do. We work to try to amend and change and improve laws at the state and local and federal level and internationally sometimes, and it's hard.
But the two of you together, I think there's something really fascinating about the way, you know, you're building a better future and building in protections for that better future at the same time.
And, like, you're aware of why that's so important. I think there's a big lesson there for a lot of people who work in the tech field and in the science field about, you know, you can make incredible things and also make sure they don't cause huge problems. Right? And that's just a really important lesson.
What we do with this podcast is we do try to think about what the better future that people are building looks like, what it should look like. And the two of you are, you know, thinking about that in a way that I think a lot of our guests aren't because you're at the forefront of a lot of this technology. But I'd love to hear what Rafa and then Jared, you each think, uh, science and the law look like if you get it right, if things go the way you hope they do, what, what does the technology look like? What did the protections look like? Rafa, could you start.

RAFAEL YUSTE: Yeah, I would comment, there are five places in the world today where there's, uh, hard law protection for brain activity and brain data: the Republic of Chile, the state of Rio Grande do Sul in Brazil, and the states of Colorado, California, and Montana in the US. And in every one of these places there have been votes in the legislature – and they're all bicameral legislatures, so there have been 10 votes – and every single one of those votes has been unanimous.
All political parties in Chile, in Brazil - actually in Brazil there were 16 political parties. That never happened before that they all agreed on something. California, Montana, and Colorado, all unanimous except for one vote no in Colorado of a person that votes against everything. He's like, uh, he goes, he has some, some axe to grind with, uh, his companions and he just votes no on everything.
But aside from this person – actually, the Colorado bill was introduced by a Democratic representative, but, uh, the Republican side took it to heart. The Republican senator said that this is the definition of a no-brainer, and he asked for permission to introduce that bill in the Senate in Colorado.
So the person who defended the bill in the Colorado Senate was actually not a Democrat but a Republican. And why is that? Quoting this Colorado senator, it's a no-brainer: the minute you get it, you understand. Do you want your brain activity to be decoded without your consent? Well, that's not a good idea.
So not a single person that we've met has opposed this issue. I think Jared and I do the best job we can, and we work very hard. And I should tell you that we're doing this pro bono, without being compensated for our work. But the reason behind the success is really the issue – it's not just us. I think we're dealing with an issue on which there is fundamental, widespread, universal agreement.

JARED GENSER: What I would say is that, you know, on the one hand, and we appreciate of course, the kind words about the progress we're making. We have made a lot of progress in a relatively short period of time, and yet we have a dramatically long way to go.
We need to further interpret international law in the way that I'm describing to ensure that privacy includes mental privacy all around the world, and we really need national laws in every country in the world. Subnational laws and various places too, and so forth.
I will say that, as you know from all the great work you guys do with your podcast, getting something done at the federal level is of course much more difficult in the United States because of the divisions that exist. And there is no federal consumer data privacy law because we've never been able to get Republicans and Democrats to agree on the text of one.
The only kinds of consumer data protected at the federal level are healthcare data under HIPAA and financial data. And there have been multiple efforts to do a federal consumer data privacy law that have failed. In the last Congress, there was something called the American Privacy Rights Act. It was bipartisan, and it basically just got ripped apart because they were trying to put together about a dozen different categories of data that would be protected at the federal level. And each one of those has a whole industry association associated with it.
And we were able to get that draft bill amended to include neural data in it, which it didn't originally include, but ultimately the bill died before even coming to a vote at committees. In our view, you know, that then just leaves state consumer data privacy laws. There are about 35 states now that have state level laws. 15 states actually still don't.
And so we are working state by state. Ultimately, I think that when it comes, especially to the sensitivity of neural data, right? You know, we need a federal law that's going to protect neural data. But because it's not gonna be easy to achieve, definitely not as a package with a dozen other types of data, or in general, you know, one way of course to get to a federal solution is to start to work with lots of different states. All these different state consumer data privacy laws are different. I mean, they're similar, but they have differences to them, right?
And ultimately, as you start to see different kinds of regulation being adopted in different states relating to the same kind of data, our hope is that industry will start to say to members of Congress and the, you know, the Trump administration, hey, we need a common way forward here and let's set at least a floor at the federal level for what needs to be done. If states want to regulate it more than that, that's fine, but ultimately, I think that there's a huge amount of work still left to be done, obviously all around the world and at the state level as well.

CINDY COHN: I wanna push you a little bit. So what does it look like if we get it right? What is, what is, you know, what does my world look like? Do I, do I get the cool earbuds or do I not?

JARED GENSER: Yeah, I mean, look, I think the bottom line is that, you know, the world that we want to see, and I mean Rafa of course is the technologist, and I'm the human rights guy. But the world that we wanna see is one in which, you know, we promote innovation while simultaneously, you know, protecting people from abuses of their human rights and ensure that neuro technologies are developed in an ethical manner, right?
I mean, so we do need self-regulation by industry. You know, we do need national and international laws. But at the same time, you know, one in three people in their lifetimes will have a neurological disease, right?
The brain diseases that people know best or you know, from family, friends or their own experience, you know, whether you look at Alzheimer's or Parkinson's, I mean, these are devastating, debilitating and all, today, you know, irreversible conditions. I mean, all you can do with any brain disease today at best is to slow its progression. You can't stop its progression and you can't reverse it.
And eventually, in 20 or 30 years, from these kinds of emerging neurotechnologies, we're going to be able to ultimately cure brain diseases. And so that's what the world looks like, is the, think about all of the different ways in which humanity is going to be improved, when we're able to not only address, but cure, diseases of this kind, right?
And, you know, one of the other exciting parts of emerging neurotechnologies is our ability to understand ourselves, right? And our own brain and how it operates and functions. And that is, you know, very, very exciting.
Eventually we're gonna be able to decode not only thought-to-text, but even our subconscious thoughts. And that of course, you know, raises enormous questions. And this technology is also gonna, um, also even raise fundamental questions about, you know, what does it actually mean to be human? And who are we as humans, right?
And so, for example, one of the side effects of deep brain stimulation in a very, very, very small percentage of patients is a change in personality. In other words, you know, if you put a device in someone's, you know, mind to control the symptoms of Parkinson's, when you're obviously messing with a human brain, other things can happen.
And there's a well known case of a woman, um, who went from being, in essence, an extreme introvert to an extreme extrovert, you know, with deep brain stimulation as a side effect. And she's currently being studied right now, um, along with other examples of these kinds of personality changes.
And if we can figure out in the human brain, for example, what parts of the brain deal with being an introvert or an extrovert, you know, you're also raising fundamental questions about the possibility of being able to change your personality, in part, with a brain implant, right? I mean, we can already do that, obviously, with psychotropic medications for people who have mental illnesses, through psychotherapy, and so forth. But there are gonna be other ways in which we can understand how the brain operates and functions and optimize our lives through the development of these technologies.
So the upside is enormous, you know. Medically and scientifically, economically, from a self-understanding point of view. Right? And at the same time, the downside risks are profound. It's not just decoding our thoughts. I mean, we're on the cusp of an unbeatable lie detector test, which could have huge positive and negative impacts, you know, in criminal justice contexts, right?
So there are so many different implications of these emerging technologies, and we are often so far behind, on the regulatory side, the actual scientific developments that in this particular case we really need to try to do everything possible to at least develop these solutions at a pace that matches the developments, let alone get ahead of them.

JASON KELLEY: I'm fascinated to see, in talking to them, how successful they've been when there isn't a big, you know, lobbying wing of neurorights products and companies stopping them from this because they're ahead of the game. I think that's the thing that really struck me and, and something that we can hopefully learn from in the future that if you're ahead of the curve, you can implement these privacy protections much easier, obviously. That was really fascinating. And of course just talking to them about the technology set my mind spinning.

CINDY COHN: Yeah, in both directions, right? Both what an amazing opportunity and oh my God, how terrifying this is, both at the same time. I thought it was interesting because I think from where we sit as people who are trying to figure out how to bring privacy into some already baked technologies and business models and we see how hard that is, you know, but they feel like they're a little behind the curve, right? They feel like there's so much more to do. So, you know, I hope that we were able to kind of both inspire them and support them in this, because I think to us, they look ahead of the curve and I think to them, they feel a little either behind or over, you know, not overwhelmed, but see the mountain in front of them.

JASON KELLEY: A thing that really stands out to me is when Rafa was talking about the popularity of these protections, you know, and, and who on all sides of the aisle are voting in favor of these protections, it's heartwarming, right? It's inspiring that if you can get people to understand the sort of real danger of lack of privacy protections in one field. It makes me feel like we can still get people, you know, we can still win privacy protections in the rest of the fields.
Like you're worried for good reason about what's going on in your head and that, how that should be protected. But when you type on a computer, you know, that's just the stuff in your head going straight onto the web. Right? We've talked about how like the phone or your search history are basically part of the contents of your mind. And those things need privacy protections too. And hopefully we can, you know, use the success of their work to talk about how we need to also protect things that are already happening, not just things that are potentially going to happen in the future.

CINDY COHN: Yeah. And you see kind of both kinds of issues, right? Like, if they're right, it's scary. When they're wrong it's scary. But also I'm excited about and I, what I really appreciated about them, is that they're excited about the potentialities too. This isn't an effort that's about the house of no innovation. In fact, this is where responsibility ought to come from. The people who are developing the technology are recognizing the harms and then partnering with people who have expertise in kind of the law and policy and regulatory side of things. So that together, you know, they're kind of a dream team of how you do this responsibly.
And that's really inspiring to me because I think sometimes people get caught in this, um, weird, you know, choose, you know, the tech will either protect us or the law will either protect us. And I think what Rafa and Jared are really embodying and making real is that we need both of these to come together to really move into a better technological future.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 international, and includes the following music licensed Creative Commons Attribution 3.0 unported by its creators: Drops of H2O, The Filtered Water Treatment by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.

Tropical deforestation is associated with considerable heat-related mortality

Nature Climate Change - Wed, 08/27/2025 - 12:00am

Nature Climate Change, Published online: 27 August 2025; doi:10.1038/s41558-025-02411-0

The authors assess the impacts of tropical deforestation and its subsequent local warming on human heat-related mortality. They estimate that deforestation-related warming (+0.27 °C) is associated with approximately 28,000 heat-related deaths per year.

Professor Emeritus Rainer Weiss, influential physicist who forged new paths to understanding the universe, dies at 92

MIT Latest News - Tue, 08/26/2025 - 6:50pm

MIT Professor Emeritus Rainer Weiss ’55, PhD ’62, a renowned experimental physicist and Nobel laureate whose groundbreaking work confirmed a longstanding prediction about the nature of the universe, passed away on Aug. 25. He was 92.

Weiss conceived of the Laser Interferometer Gravitational-Wave Observatory (LIGO) for detecting ripples in space-time known as gravitational waves, and was later a leader of the team that built LIGO and achieved the first-ever detection of gravitational waves. He shared the Nobel Prize in Physics for this work in 2017. Together with international collaborators, he and his colleagues at LIGO would go on to detect many more of these cosmic reverberations, opening up a new way for scientists to view the universe.

During his remarkable career, Weiss also developed a more precise atomic clock and figured out how to measure the spectrum of the cosmic microwave background via a weather balloon. He later co-founded and advanced the NASA Cosmic Background Explorer project, whose measurements helped support the Big Bang theory describing the expansion of the universe.

“Rai leaves an indelible mark on science and a gaping hole in our lives,” says Nergis Mavalvala PhD ’97, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. As a doctoral student with Weiss in the 1990s, Mavalvala worked with him to build an early prototype of a gravitational-wave detector as part of her PhD thesis. “He will be so missed but has also gifted us a singular legacy. Every gravitational wave event we observe will remind us of him, and we will smile. I am indeed heartbroken, but also so grateful for having him in my life, and for the incredible gifts he has given us — of passion for science and discovery, but most of all to always put people first,” she says.

A member of the MIT physics faculty since 1964, Weiss was known as a committed mentor and teacher, as well as a dedicated researcher. 

“Rai’s ingenuity and insight as an experimentalist and a physicist were legendary,” says Deepto Chakrabarty, the William A. M. Burden Professor in Astrophysics and head of the Department of Physics. “His no-nonsense style and gruff manner belied a very close, supportive and collaborative relationship with his students, postdocs, and other mentees. Rai was a thoroughly MIT product.”

“Rai held a singular position in science: He was the creator of two fields — measurements of the cosmic microwave background and of gravitational waves. His students have gone on to lead both fields and carried Rai’s rigor and decency to both. He not only created a huge part of important science, he also populated them with people of the highest caliber and integrity,” says Peter Fisher, the Thomas A. Frank Professor of Physics and former head of the physics department.

Enabling a new era in astrophysics

LIGO is a system of two identical detectors located 1,865 miles apart. By sending finely tuned lasers back and forth through the detectors, scientists can detect perturbations caused by gravitational waves, whose existence was predicted by Albert Einstein. These discoveries illuminate ancient collisions and other events in the early universe, and have confirmed Einstein’s theory of general relativity. Today, the LIGO Scientific Collaboration involves hundreds of scientists at MIT, Caltech, and other universities, and, together with the Virgo and KAGRA observatories in Italy and Japan, it makes up the global LVK Collaboration. Five decades ago, however, the instrument concept was an MIT class exercise conceived by Weiss.

As he told MIT News in 2017, in generating the initial idea, Weiss wondered: “What’s the simplest thing I can think of to show these students that you could detect the influence of a gravitational wave?”

To realize the audacious design, Weiss teamed up in 1976 with physicist Kip Thorne, who, based in part on conversations with Weiss, soon seeded the creation of a gravitational wave experiment group at Caltech. The two formed a collaboration between MIT and Caltech, and in 1979, the late Scottish physicist Ronald Drever, then of the University of Glasgow, joined the effort at Caltech. The three scientists — who became the co-founders of LIGO — worked to refine the dimensions and scientific requirements for an instrument sensitive enough to detect a gravitational wave. Barry Barish later joined the team at Caltech, helping to secure funding and bring the detectors to completion.

After receiving support from the National Science Foundation, LIGO broke ground in the mid-1990s, constructing interferometric detectors in Hanford, Washington, and in Livingston, Louisiana. 

Years later, when he shared the Nobel Prize with Thorne and Barish for his work on LIGO, Weiss noted that hundreds of colleagues had helped to push forward the search for gravitational waves.

“The discovery has been the work of a large number of people, many of whom played crucial roles,” Weiss said at an MIT press conference. “I view receiving this [award] as sort of a symbol of the various other people who have worked on this.”

He continued: “This prize and others that are given to scientists is an affirmation by our society of [the importance of] gaining information about the world around us from reasoned understanding of evidence.”

“While I have always been amazed and guided by Rai’s ingenuity, integrity, and humility, I was most impressed by his breadth of vision and ability to move between worlds,” says Matthew Evans, the MathWorks Professor of Physics. “He could seamlessly shift from the smallest technical detail of an instrument to the global vision for a future observatory. In the last few years, as the idea for a next-generation gravitational-wave observatory grew, Rai would often be at my door, sharing ideas for how to move the project forward on all levels. These discussions ranged from quantum mechanics to global politics, and Rai’s insights and efforts have set the stage for the future.”

A lifelong fascination with hard problems

Weiss was born in 1932 in Berlin. His family fled Nazi Germany to Prague and then emigrated to New York City, where Weiss grew up with a love of classical music and electronics, earning money by fixing radios.

He enrolled at MIT but dropped out in his junior year, only to return shortly afterward as a technician in the former Building 20. There, Weiss met physicist Jerrold Zacharias, who encouraged him to finish his undergraduate degree in 1955 and go on to his PhD in 1962.

Weiss spent some time at Princeton University as a postdoc in the legendary group led by Robert Dicke, where he developed experiments to test gravity. He returned to MIT as an assistant professor in 1964, starting a new group in the Research Laboratory of Electronics dedicated to cosmology and gravitation.

Weiss received numerous awards and honors in addition to the Nobel Prize, including the Medaille de l’ADION, the 2006 Gruber Prize in Cosmology, and the 2007 Einstein Prize of the American Physical Society. He was a fellow of the American Association for the Advancement of Science, the American Academy of Arts and Sciences, and the American Physical Society, as well as a member of the National Academy of Sciences. In 2016, Weiss received a Special Breakthrough Prize in Fundamental Physics, the Gruber Prize in Cosmology, the Shaw Prize in Astronomy, and the Kavli Prize in Astrophysics, all shared with Drever and Thorne. He also shared the Princess of Asturias Award for Technical and Scientific Research with Thorne, Barry Barish of Caltech, and the LIGO Scientific Collaboration.

Weiss is survived by his wife, Rebecca; his daughter, Sarah, and her husband, Tony; his son, Benjamin, and his wife, Carla; and a grandson, Sam, and his wife, Constance. Details about a memorial are forthcoming.

This article may be updated.

Simpler models can outperform deep learning at climate prediction

MIT Latest News - Tue, 08/26/2025 - 9:00am

Environmental scientists are increasingly using enormous artificial intelligence models to make predictions about changes in weather and climate, but a new study by MIT researchers shows that bigger models are not always better.

The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models.

Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case.

The researchers developed a more robust way of evaluating these techniques, which shows that, while simple models are more accurate when estimating regional surface temperatures, deep-learning approaches can be the best choice for estimating local rainfall.

They used these results to enhance a simulation tool known as a climate emulator, which can rapidly simulate the effect of human activities on the future climate.

The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science rests on a proven set of physical laws and approximations, and the challenge becomes how to incorporate those into AI models.

“We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and director of the Center for Sustainability Science and Strategy.

Selin’s co-authors are lead author Björn Lütjens, a former EAPS postdoc who is now a research scientist at IBM Research; senior author Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in EAPS and co-director of the Lorenz Center; and Duncan Watson-Parris, assistant professor at the University of California at San Diego. Selin and Ferrari are also co-principal investigators of the Bringing Computation to the Climate Challenge project, out of which this research emerged. The paper appears today in the Journal of Advances in Modeling Earth Systems.

Comparing emulators

Because the Earth’s climate is so complex, running a state-of-the-art climate model to predict how pollution levels will impact environmental factors like temperature can take weeks on the world’s most powerful supercomputers.

Scientists often create climate emulators, simpler approximations of a state-of-the-art climate model, which are faster and more accessible. A policymaker could use a climate emulator to see how alternative assumptions about greenhouse gas emissions would affect future temperatures, helping them develop regulations.

But an emulator isn’t very useful if it makes inaccurate predictions about the local impacts of climate change. While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches.

The MIT researchers performed such a study. Using a common benchmark dataset for evaluating climate emulators, they compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model.
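
For readers unfamiliar with the technique, linear pattern scaling fits, for each grid cell, a straight-line relationship between the local variable and global-mean surface temperature, then scales the resulting spatial pattern with any projected global-mean trajectory. The sketch below illustrates that idea in Python; the array shapes and function names are illustrative assumptions, not code from the study.

    # Minimal sketch of linear pattern scaling (LPS); illustrative only.
    # Assumes annual-mean training fields shaped (years, lat, lon) and a
    # matching global-mean surface temperature series.
    import numpy as np

    def fit_lps(local_fields, global_mean_temp):
        # One least-squares line per grid cell: local value ~ a + b * T_global.
        n_years = global_mean_temp.shape[0]
        X = np.column_stack([np.ones(n_years), global_mean_temp])
        Y = local_fields.reshape(n_years, -1)  # flatten space: fit all cells at once
        coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)
        intercept = coeffs[0].reshape(local_fields.shape[1:])
        slope = coeffs[1].reshape(local_fields.shape[1:])
        return intercept, slope

    def predict_lps(intercept, slope, global_mean_temp):
        # Scale the fitted spatial pattern by a new global-mean trajectory.
        return intercept[None] + slope[None] * global_mean_temp[:, None, None]

Fitted once on climate-model output, maps like these can emulate local change for a new emissions scenario from its projected global-mean temperature alone, which is what makes the method so much cheaper than a full climate model.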

Their results showed that LPS outperformed deep-learning models at predicting nearly all the parameters they tested, including temperature and precipitation.

“Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens.

Some initial results seemed to fly in the face of the researchers’ domain knowledge. The powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern.

They found that the high amount of natural variability in climate model runs can cause the deep learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of LPS, which averages out those oscillations.

Constructing a new evaluation

From there, the researchers constructed a new evaluation with more data that address natural climate variability. With this new evaluation, the deep-learning model performed slightly better than LPS for local precipitation, but LPS was still more accurate for temperature predictions.
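
To build intuition for why this matters, consider the toy calculation below; it is a sketch for illustration, not the paper’s benchmark, and every number in it is invented. Two emulators of a slightly nonlinear forced signal are scored against a single noisy model run and against a 50-member ensemble mean.

    # Toy illustration of how internal variability can hide real skill
    # differences between emulators. Not the study's benchmark.
    import numpy as np

    rng = np.random.default_rng(0)
    n_years, n_members = 100, 50
    t = np.arange(n_years)
    forced = np.linspace(0.0, 2.0, n_years) ** 1.5  # slightly nonlinear forced signal
    # Each ensemble member adds unpredictable internal variability.
    ensemble = forced + 0.4 * rng.standard_normal((n_members, n_years))

    emulator_nonlinear = forced  # captures the nonlinear forced response
    emulator_linear = np.polyval(np.polyfit(t, forced, 1), t)  # best straight-line fit

    def rmse(pred, target):
        return np.sqrt(np.mean((pred - target) ** 2))

    single_run = ensemble[0]               # benchmark target: one noisy realization
    ensemble_mean = ensemble.mean(axis=0)  # target with variability averaged out

    for name, target in [("single run", single_run), ("ensemble mean", ensemble_mean)]:
        print(f"vs {name}: nonlinear {rmse(emulator_nonlinear, target):.3f}, "
              f"linear {rmse(emulator_linear, target):.3f}")

Scored against one realization, the two emulators land within a few percent of each other because internal variability dominates the error; scored against the ensemble mean, the emulator that captures the nonlinear signal is clearly better. Averaging over more data is exactly the kind of correction the researchers’ new evaluation applies.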

“It is important to use the modeling tool that is right for the problem, but in order to do that you also have to set up the problem the right way in the first place,” Selin says.

Based on these results, the researchers incorporated LPS into a climate emulation platform to predict local temperature changes in different emission scenarios.

“We are not advocating that LPS should always be the goal. It still has limitations. For instance, LPS doesn’t predict variability or extreme weather events,” Ferrari adds.

Rather, they hope their results emphasize the need to develop better benchmarking techniques, which could provide a fuller picture of which climate emulation technique is best suited for a particular situation.

“With an improved climate emulation benchmark, we could use more complex machine-learning methods to explore problems that are currently very hard to address, like the impacts of aerosols or estimations of extreme precipitation,” Lütjens says.

Ultimately, more accurate benchmarking techniques will help ensure policymakers are making decisions based on the best available information.

The researchers hope others build on their analysis, perhaps by studying additional improvements to climate emulation methods and benchmarks. Such research could explore impact-oriented metrics like drought indicators and wildfire risks, or new variables like regional wind speeds.

This research is funded, in part, by Schmidt Sciences, LLC, and is part of the MIT Climate Grand Challenges team for “Bringing Computation to the Climate Challenge.”
