Feed aggregator
EFF to Court: Chatbot Output Can Reflect Human Expression
When a technology can have a conversation with you, it’s natural to anthropomorphize that technology—to see it as a person. It’s tempting to see a chatbot as a thinking, speaking robot, but this gives the technology too much credit. This can also lead people—including judges in cases about AI chatbots—to overlook the human expressive choices connected to the words that chatbots produce. If chatbot outputs had no First Amendment protections, the government could potentially ban chatbots that criticize the administration or reflect viewpoints the administration disagrees with.
In fact, chatbot output can reflect the expressive choices of its creators and users, and it also implicates users’ right to receive information. That’s why EFF and the Center for Democracy and Technology (CDT) have filed an amicus brief in Garcia v. Character Technologies explaining how large language models work and the various kinds of protected speech at stake.
Among the questions in this case is the extent to which free speech protections extend to the creation, dissemination, and receipt of chatbot outputs. Our brief explains how the expressive choices of a chatbot developer can shape its output, such as during reinforcement learning, when humans are instructed to give positive feedback to responses that align with the scientific consensus around climate change and negative feedback for denying it (or vice versa). This chain of human expressive decisions extends from early stages of selecting training data to crafting a system prompt. A user’s instructions are also reflected in chatbot output. Far from being the speech of a robot, chatbot output often reflects human expression that is entitled to First Amendment protection.
In addition, the right to receive speech in itself is protected—even when the speaker would have no independent right to say it. Users have a right to access the information chatbots provide.
None of this is to suggest that chatbots cannot be regulated or that the harms they cause cannot be addressed. The First Amendment simply requires that those regulations be appropriately tailored to the harm to avoid unduly burdening the right to express oneself through the medium of a chatbot, or to receive the information it provides.
We hope our brief will be helpful to the court as the case progresses, since the judge decided not to send the question up on appeal at this time.
Read our brief below.
No Walled Gardens. No Gilded Cages.
Sometimes technology feels like a gilded cage, and you’re not the one holding the key. Most people can’t live off the grid, so how do we stop data brokers who track and exploit you for money? Tech companies that distort what you see and hear? Governments that restrict, censor, and intimidate? No one can do it alone, but EFF was built to protect your rights. With your support, we can take back control.
With 35 years of deep expertise and the support of our members, EFF is delivering bold action to solve the biggest problems facing tech users: suing the government when it oversteps its bounds; empowering people and lawmakers to hold the line; and creating free, public interest software tools, guides, and explainers to make the web better.
EFF members enable thousands of hours of our legal work, activism, investigation, and software development for the public good. Join us today.
Think about it: in the face of rising authoritarianism and invasive surveillance, where would we be without an encrypted web? Your security online depends on researchers, hackers, and creators who are willing to take privacy and free speech rights seriously. That's why EFF will eagerly protect the beating heart of that movement at this week's summer security conferences in Las Vegas. This renowned summit of computer hacking events—BSidesLV, Black Hat USA, and DEF CON—illustrates the key role a community can play in helping you break free of the trappings of technology and retake the reins.
For summer security week, EFF’s DEF CON 33 t-shirt design Beyond the Walled Garden by Hannah Diaz is your gift at the Gold Level membership. Look closer to discover this year’s puzzle challenge! Many thanks to our volunteer puzzlemasters jabberw0nky and Elegin for all their work.
A Token of Appreciation: Become a recurring monthly or annual Sustaining Donor this week and you'll get a numbered EFF35 Challenge Coin. Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and EFF owes its strength to technology creators and users like you.
Our team is on a relentless mission to protect your civil liberties and human rights wherever they meet tech, but it’s only possible with your help.
Break free of tech’s walled gardens.
Surveilling Your Children with AirTags
Skechers is making a line of kids’ shoes with a hidden compartment for an AirTag.
Blocking Access to Harmful Content Will Not Protect Children Online, No Matter How Many Times UK Politicians Say So
The UK is having a moment. In late July, new rules took effect that require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce age checks to prevent children from accessing such content. Online services are also required to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.
During the four years that the legislation behind these changes—the Online Safety Act (OSA)—was debated in Parliament, and in the two years since, while the UK’s independent online regulator Ofcom devised the implementing regulations, experts from across civil society repeatedly flagged concerns about the impact of this law on both adults’ and children’s rights. Yet politicians in the UK pushed ahead and enacted one of the most contentious age verification mandates that we’ve seen.
No one—no matter their age—should have to hand over their passport or driver’s license just to access legal information and speak freely. As we’ve been saying for many years now, the approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the children that it is trying to protect. Here are five reasons why:
Age Verification Systems Lead to Less Privacy
Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy. To keep children out of a website or away from certain content, online services need to confirm the ages of all their visitors, not just children—for example by asking for government-issued documentation or by using biometric data, such as face scans, that are shared with third-party services like Yoti or Persona to estimate whether the user is over 18. This means that adults and children must all share their most sensitive and personal information with online services to access a website.
Once this information is shared to verify a user's age, there’s no way for people to know how it will be retained or used by that company, including whether it will be sold or shared with even more third parties like data brokers or law enforcement. The more information a website collects, the more chances there are for that information to get into the hands of a marketing company, a bad actor, a state actor, or anyone who files a legal request for it. If a website, or one of the intermediaries it uses, misuses or mishandles the data, the visitor might never find out. There is also a risk that this data, once collected, can be linked to other unrelated web activity, creating an aggregated profile of the user that grows more valuable as each new data point is added.
As we argued extensively during the passage of the Online Safety Act, any attempt to protect children online should not include measures that require platforms to collect data or remove privacy protections around users’ identities. But with the Online Safety Act, users are being forced to trust that platforms (and whatever third-party verification services they choose to partner with) are guardrailing users’ most sensitive information—not selling it through the opaque supply chains that allow corporations and data brokers to make millions. The solution is not to come up with a more sophisticated technology, but to simply not collect the data in the first place.
This Isn’t Just About Safety—It’s Censorship
Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But under the Online Safety Act, the UK government—with Ofcom—are deciding what speech young people have access to, and are forcing platforms to remove any content considered harmful. As part of this, platforms are required to build “safer algorithms” to ensure that children do not encounter harmful content, and introduce effective content moderation systems to remove harmful content when platforms become aware of it.
Because the OSA threatens large fines or even jail time for any non-compliance, platforms are forced to over-censor content to ensure that they do not face any such liability. Reports are already showing the censorship of content that falls outside the parameters of the OSA, such as footage of police attacking pro-Palestinian protestors being blocked on X, the subreddit r/cider—yes, the beverage—asking users for photo ID, and smaller websites closing down entirely. UK-based organisation Open Rights Group are tracking this censorship with their tool, Blocked.
We know that the scope for so-called “harmful content” is subjective and arbitrary, but it also often sweeps up content like pro-LGBTQ+ speech. Policies like the OSA, that claim to “protect children” or keep sites “family-friendly,” often label LGBTQ+ content as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies. But in all scenarios, legal content is being removed at the discretion of government agencies and online platforms, all under the guise of protecting children.
People Do Not Want This
Users in the UK have been clear in showing that they do not want this. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK. The BBC reported that one app, Proton VPN, saw an 1,800% spike in UK daily sign-ups after the age check rules took effect. A similar spike in searches for VPNs was evident in January when Florida joined the ever-growing list of U.S. states implementing an age verification mandate on sites that host adult content, including pornography websites like Pornhub.
Whilst VPNs may be able to disguise the source of your internet activity, they are not foolproof or a solution to age verification laws. Ofcom has already started discouraging their use, and with time, it will become increasingly difficult for VPNs to effectively circumvent age verification requirements as enforcement of the OSA adapts and deepens. VPN providers will struggle to keep up with these constantly changing laws to ensure that users can bypass the restrictions, especially as more sophisticated detection systems are introduced to identify and block VPN traffic.
Some politicians in the Labour Party argued that a ban on VPNs will be essential to prevent users circumventing age verification checks. But banning VPNs, just like introducing age verification measures, will not achieve this goal. It will, however, function as an authoritarian control on accessing information in the UK. If you are navigating protecting your privacy or want to learn more about VPNs, EFF provides a comprehensive guide on using VPNs and protecting digital privacy—a valuable resource for anyone looking to use these tools.
Alongside increased VPN usage, a petition calling for the repeal of the Online Safety Act recently hit more than 400,000 signatures. In its official response to the petition, the UK government said that it “has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.” This is not good enough: the government must immediately treat the reasonable concerns of people in the UK with respect, not disdain, and revisit the OSA.
Users Will Be Exposed to Amplified Discrimination
To check users' ages, three types of systems are typically deployed: age verification, which requires a person to prove their age and identity; age assurance, whereby users are required to prove that they are of a certain age or age range, such as over 18; or age estimation, which typically describes the process or technology of estimating ages to a certain range. The OSA requires platforms to check ages through age assurance to prove that those accessing platforms are over 18, but leaves the specific tool for measuring this at the platforms’ discretion. This may therefore involve uploading a government-issued ID, or submitting a face scan to an app that will then use a third-party platform to “estimate” your age.
From what we know about systems that use face scanning in other contexts, such as face recognition technology used by law enforcement, even the best technology is susceptible to mistakes and misidentification. Just last year, a legal challenge was launched against the Met Police after a community worker was wrongly identified and detained following a misidentification by the Met’s live facial recognition system.
For age assurance purposes, we know that the technology at best has an error range of over a year, which means that users may risk being incorrectly blocked or locked out of content by erroneous estimations of their age—whether unintentionally or due to discriminatory algorithmic patterns that incorrectly determine people’s identities. These algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance that people should not have to be subject to just to access content that the government could consider harmful.
Not Everyone Has Access to an ID or Personal Device
Many advocates of the ‘digital transition’ introduce document-based verification requirements or device-based age verification systems on the assumption that every individual has access to a form of identification or their own smartphone. But this is not true. In the UK, millions of people don’t hold a form of identification or own a personal mobile device, instead sharing with family members or using public devices like those at a library or internet cafe. Yet because age checks under the OSA involve checking a user’s age through government-issued ID documents or face scans on a mobile device, millions of people will be left excluded from online speech and will lose access to much of the internet.
These are primarily lower-income or older people who are often already marginalized, and for whom the internet may be a critical part of life. We need to push back against age verification mandates like the Online Safety Act, not just because they make children less safe online, but because they risk undermining crucial access to digital services, eroding privacy and data protection, and limiting freedom of expression.
The Way Forward
The case of safety online is not solved through technology alone, and children deserve a more intentional and holistic approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves. Rather than weakening rights for already vulnerable communities online, politicians must acknowledge these shortcomings and explore less invasive approaches to protect all people from online harms. We encourage politicians in the UK to look into what is best, and not what is easy.
Killing EPA climate rule could backfire on industry
Walkout in Texas Legislature could hinder flood improvements
FEMA says in court filing it ‘has not ended’ disaster grant program
GHG booster to Zeldin: Beware ‘public health harms’ of regulations
Merkley assails US loan for Mozambique natural gas project
Confused politics fans flames of Southern Europe’s wildfires
Rescuers call off search for 11 presumed dead in Pakistan floods
Scotland slammed by 90 mph winds, disrupting festivals and travel
South Africa plans jail time, fines under new emission rules
EFF at the Las Vegas Security Conferences
It’s time for EFF’s annual journey to Las Vegas for the summer security conferences: BSidesLV, Black Hat USA, and DEF CON. Our lawyers, activists, and technologists are always excited to support this community of security researchers and tinkerers—the folks who push computer security forward (and somehow survive the Vegas heat in their signature black hoodies).
As in past years, EFF attorneys will be on-site to assist speakers and attendees. If you have legal concerns about an upcoming talk or sensitive infosec research—during the Las Vegas conferences or anytime—don’t hesitate to reach out at info@eff.org. Share a brief summary of the issue, and we’ll do our best to connect you with the right resources. You can also learn more about our work supporting technologists on our Coders’ Rights Project page.
Be sure to swing by the expo areas at all three conferences to say hello to your friendly neighborhood EFF staffers! You’ll probably spot us in the halls, but we’d love for you to stop by our booths to catch up on our latest work, get on our action alerts list, or become an EFF member! For the whole week, we’ll have our limited-edition DEF CON 33 t-shirts on hand—we can’t wait to see them take over each conference!
EFF Staff Presentations
Ask EFF at BSides Las Vegas
At this interactive session, our panelists will share updates on critical digital rights issues and EFF's ongoing efforts to safeguard privacy, combat surveillance, and advocate for freedom of expression.
WHEN: Tuesday, August 5, 15:00
WHERE: Skytalks at the Tuscany Suites Hotel & Casino
Recording PCAPs from Stingrays With a $20 Hotspot
What if you could use Wireshark on the connection between your cellphone and the tower it's connected to? In this talk we present Rayhunter, a cell site simulator detector built on top of a cheap cellular hotspot.
WHEN: Friday, August 8, 13:30
WHERE: LVCC - L1 - EHW3 - Track 1
Rayhunter Build Clinic
Come out and build EFF's Rayhunter! ($10 materials fee as an EFF donation)
WHEN: Friday, August 8 at 14:30
WHERE: Hackers.Town Community Space
Protect Your Privacy Online and on the Streets with EFF Tools
The Electronic Frontier Foundation (EFF) has been protecting your rights to privacy, free expression, and security online for 35 years! One important way we push for these freedoms is through our free, open source tools. We’ll provide an overview of how these tools work, including Privacy Badger, Rayhunter, Certbot, and Surveillance Self-Defense, and how they can help keep you safe online and on the streets.
WHEN: Friday, August 8 at 17:00
WHERE: Community Stage
Rayhunter Internals
Rayhunter is an open source project from EFF to detect IMSI catchers. In this follow up to our main stage talk about the project we will take a deep dive into the internals of Rayhunter. We will talk about the architecture of the project, what we have gained by using Rust, porting to other devices, how to jailbreak new devices, the design of our detection heuristics, open source shenanigans, and how we analyze files sent to us.
WHEN: Saturday, August 9, at 12:00
WHERE: Hackers.Town Community Space
Ask EFF at DEF CON 33
We're excited to answer your burning questions on pressing digital rights issues! Our expert panelists will offer brief updates on EFF's work defending your digital rights, before opening the floor for attendees to ask their questions. This dynamic conversation centers challenges DEF CON attendees actually face, and is an opportunity to connect on common causes.
WHEN: Saturday, August 9, at 14:30
WHERE: LVCC - L1 - EHW3 - Track 4
The EFF Benefit Poker Tournament is back for DEF CON 33! Your buy-in is paired with a donation to support EFF’s mission to protect online privacy and free expression for all. Join us at the Planet Hollywood Poker Room as a player or spectator. Play for glory. Play for money. Play for the future of the web.
WHEN: Friday, August 8, 2025 - 12:00-15:00
WHERE: Planet Hollywood Poker Room, 3667 Las Vegas Blvd South, Las Vegas, NV 89109
Yes, it's exactly what it sounds like. Join EFF at the intersection of facial hair and hacker culture. Spectate, heckle, or compete in any of four categories: Full Beard, Partial Beard, Moustache Only, or Freestyle (anything goes, so create your own facial apparatus!). Prizes! Donations to EFF! Beard oil! Get the latest updates.
WHEN: Saturday, August 9, 10:00-12:00
WHERE: DEF CON Contests Stage (Look for the Moustache Flag)
Join us for some tech trivia on Saturday, August 9 at 7:00 PM! EFF's team of technology experts have crafted challenging trivia about the fascinating, obscure, and trivial aspects of digital security, online rights, and internet culture. Competing teams will plumb the unfathomable depths of their knowledge, but only the champion hive mind will claim the First Place Tech Trivia Trophy and EFF swag pack. The second and third place teams will also win great EFF gear.
WHEN: Saturday, August 9, 19:00-22:00
WHERE: Contest Stage
Come find our table at BSidesLV (Middle Ground), Black Hat USA (back of the Business Hall), and DEF CON (Vendor Hall) to learn more about the latest in online rights, get on our action alert list, or donate to become an EFF member. We'll also have our limited-edition DEF CON 33 shirts available starting Monday at BSidesLV! These shirts have a puzzle incorporated into the design. Snag one online for yourself starting on Tuesday, August 5 if you're not in Vegas!
Support Security & Digital Innovation
Neural traits of pro-environmental behaviour
Nature Climate Change, Published online: 05 August 2025; doi:10.1038/s41558-025-02403-0
Cyclones and economic growth
Nature Climate Change, Published online: 05 August 2025; doi:10.1038/s41558-025-02401-2
Stable Arctic dense water formation
Nature Climate Change, Published online: 05 August 2025; doi:10.1038/s41558-025-02404-z
Shifting work hours reduces labour loss
Nature Climate Change, Published online: 05 August 2025; doi:10.1038/s41558-025-02402-1
Making the most of the Methods
Nature Climate Change, Published online: 05 August 2025; doi:10.1038/s41558-025-02406-x
Clear methods reporting is key for reliable and reproducible science and can also prevent an extended review process. We highlight Methods section requirements for a more efficient publication process.
AI helps chemists develop tougher plastics
A new strategy for strengthening polymer materials could lead to more durable plastics and cut down on plastic waste, according to researchers at MIT and Duke University.
Using machine learning, the researchers identified crosslinker molecules that can be added to polymer materials, allowing them to withstand more force before tearing. These crosslinkers belong to a class of molecules known as mechanophores, which change their shape or other properties in response to mechanical force.
“These molecules can be useful for making polymers that would be stronger in response to force. You apply some stress to them, and rather than cracking or breaking, you instead see something that has higher resilience,” says Heather Kulik, the Lammot du Pont Professor of Chemical Engineering at MIT, who is also a professor of chemistry and the senior author of the study.
The crosslinkers that the researchers identified in this study are iron-containing compounds known as ferrocenes, which until now had not been broadly explored for their potential as mechanophores. Experimentally evaluating a single mechanophore can take weeks, but the researchers showed that they could use a machine-learning model to dramatically speed up this process.
MIT postdoc Ilia Kevlishvili is the lead author of the open-access paper, which appeared Friday in ACS Central Science. Other authors include Jafer Vakil, a Duke graduate student; David Kastner and Xiao Huang, both MIT graduate students; and Stephen Craig, a professor of chemistry at Duke.
The weakest link
Mechanophores are molecules that respond to force in unique ways, typically by changing their color, structure, or other properties. In the new study, the MIT and Duke team wanted to investigate whether they could be used to help make polymers more resilient to damage.
The new work builds on a 2023 study from Craig and Jeremiah Johnson, the A. Thomas Guertin Professor of Chemistry at MIT, and their colleagues. In that work, the researchers found that, surprisingly, incorporating weak crosslinkers into a polymer network can make the overall material stronger. When materials with these weak crosslinkers are stretched to the breaking point, any cracks propagating through the material try to avoid the stronger bonds and go through the weaker bonds instead. This means the crack has to break more bonds than it would if all of the bonds were the same strength.
To find new ways to exploit that phenomenon, Craig and Kulik joined forces to try to identify mechanophores that could be used as weak crosslinkers.
“We had this new mechanistic insight and opportunity, but it came with a big challenge: Of all possible compositions of matter, how do we zero in on the ones with the greatest potential?” Craig says. “Full credit to Heather and Ilia for both identifying this challenge and devising an approach to meet it.”
Discovering and characterizing mechanophores is a difficult task that requires either time-consuming experiments or computationally intense simulations of molecular interactions. Most of the known mechanophores are organic compounds, such as cyclobutane, which was used as a crosslinker in the 2023 study.
In the new study, the researchers wanted to focus on molecules known as ferrocenes, which are believed to hold potential as mechanophores. Ferrocenes are organometallic compounds that have an iron atom sandwiched between two carbon-containing rings. Those rings can have different chemical groups added to them, which alter their chemical and mechanical properties.
Many ferrocenes are used as pharmaceuticals or catalysts, and a handful are known to be good mechanophores, but most have not been evaluated for that use. Experimental tests on a single potential mechanophore can take several weeks, and computational simulations, while faster, still take a couple of days. Evaluating thousands of candidates using these strategies is a daunting task.
Realizing that a machine-learning approach could dramatically speed up the characterization of these molecules, the MIT and Duke team decided to use a neural network to identify ferrocenes that could be promising mechanophores.
They began with information from a database known as the Cambridge Structural Database, which contains the structures of 5,000 different ferrocenes that have already been synthesized.
“We knew that we didn’t have to worry about the question of synthesizability, at least from the perspective of the mechanophore itself. This allowed us to pick a really large space to explore with a lot of chemical diversity, that also would be synthetically realizable,” Kevlishvili says.
First, the researchers performed computational simulations for about 400 of these compounds, allowing them to calculate how much force is necessary to pull atoms apart within each molecule. For this application, they were looking for molecules that would break apart quickly, as these weak links could make polymer materials more resistant to tearing.
Then they used this data, along with information on the structure of each compound, to train a machine-learning model. This model was able to predict the force needed to activate the mechanophore, which in turn influences resistance to tearing, for the remaining 4,500 compounds in the database, plus an additional 7,000 compounds that are similar to those in the database but have some atoms rearranged.
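The workflow described above can be sketched in miniature: fit a regressor on descriptors and simulated rupture forces for a small labeled subset, then predict forces across the full library and rank the weakest candidates. This is only an illustrative sketch, not the authors' actual pipeline; the descriptors and data below are synthetic placeholders, and the scikit-learn `MLPRegressor` stands in for whatever neural network the study used.

```python
# Hedged sketch of "simulate a few hundred, predict the rest":
# synthetic stand-ins for real chemical descriptors and computed forces.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder descriptors for a library of 5,000 ferrocenes
# (real features might encode substituent bulk or ring interactions).
n_total, n_features = 5000, 16
X = rng.normal(size=(n_total, n_features))

# Pretend ~400 compounds have simulated rupture forces (the labels).
n_labeled = 400
true_weights = rng.normal(size=n_features)
y_labeled = X[:n_labeled] @ true_weights + rng.normal(scale=0.1, size=n_labeled)

scaler = StandardScaler().fit(X[:n_labeled])
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(scaler.transform(X[:n_labeled]), y_labeled)

# Predict rupture force for the unlabeled compounds and rank them:
# a low predicted force marks a promising "weak link" crosslinker.
pred = model.predict(scaler.transform(X[n_labeled:]))
candidates = np.argsort(pred)[:100] + n_labeled  # 100 weakest, library indices
```

The payoff of this pattern is the cost asymmetry the article describes: a few hundred expensive simulations buy cheap predictions for thousands of compounds, so only the top-ranked candidates need experimental follow-up.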
The researchers discovered two main features that seemed likely to increase tear resistance. One was interactions between the chemical groups that are attached to the ferrocene rings. Additionally, the presence of large, bulky molecules attached to both rings of the ferrocene made the molecule more likely to break apart in response to applied forces.
While the first of these features was not surprising, the second trait was not something a chemist would have predicted beforehand, and could not have been detected without AI, the researchers say. “This was something truly surprising,” Kulik says.
Tougher plastics
Once the researchers identified about 100 promising candidates, Craig’s lab at Duke synthesized a polymer material incorporating one of them, known as m-TMS-Fc. Within the material, m-TMS-Fc acts as a crosslinker, connecting the polymer strands that make up polyacrylate, a type of plastic.
By applying force to each polymer until it tore, the researchers found that the weak m-TMS-Fc linker produced a strong, tear-resistant polymer. This polymer turned out to be about four times tougher than polymers made with standard ferrocene as the crosslinker.
“That really has big implications because if we think of all the plastics that we use and all the plastic waste accumulation, if you make materials tougher, that means their lifetime will be longer. They will be usable for a longer period of time, which could reduce plastic production in the long term,” Kevlishvili says.
The researchers now hope to use their machine-learning approach to identify mechanophores with other desirable properties, such as the ability to change color or become catalytically active in response to force. Such materials could be used as stress sensors or switchable catalysts, and they could also be useful for biomedical applications such as drug delivery.
In those studies, the researchers plan to focus on ferrocenes and other metal-containing mechanophores that have already been synthesized but whose properties are not fully understood.
“Transition metal mechanophores are relatively underexplored, and they’re probably a little bit more challenging to make,” Kulik says. “This computational workflow can be broadly used to enlarge the space of mechanophores that people have studied.”
The research was funded by the National Science Foundation Center for the Chemistry of Molecularly Optimized Networks (MONET).