Feed aggregator

MIT engineers develop a magnetic transistor for more energy-efficient electronics

MIT Latest News - Wed, 09/23/2026 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; senior authors Frances Ross, the TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; and others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Act Now to Stop California’s Paternalistic and Privacy-Destroying Social Media Ban

EFF: Updates - Fri, 04/24/2026 - 7:11pm

California lawmakers are fast-tracking A.B. 1709—a sweeping bill that would ban anyone under 16 from using social media and force every user, regardless of age, to verify their identity before accessing social platforms.

That means that under this bill, all Californians would be required to submit highly sensitive government-issued ID or biometric information to private companies simply to participate in the modern public square. In the name of “safety,” this bill would destroy online anonymity, expose sensitive personal data to breach and abuse, and replace parental decision-making with state-mandated censorship.

A.B. 1709 has already passed out of the Assembly Privacy and Judiciary Committees with nearly unanimous support. Its next stop is the Assembly Appropriations Committee, followed by a floor vote—likely within the next week.

Take action

Tell Your Representative to OPPOSE A.B. 1709

California Is About to Set a Dangerous Precedent for Online Censorship

By banning access to social media platforms for young people under 16, California is emulating Australia, where early results show exactly what EFF and other critics predicted: overblocking by platforms, leaving youth without support and even adults barred from access; major spikes in VPN use and other workarounds ranging from clever to desperate; and smaller platforms shutting down rather than attempting costly compliance with these sweeping bills.

California should not be racing to replicate those failures. After all, when California leads—especially on tech—other states follow. There is no reason for California to lead the nation into an unconstitutional social media ban that destroys privacy and harms youth.

Take action

Tell Your Representative to OPPOSE A.B. 1709

What’s Wrong With A.B. 1709?

Just about everything.

A.B. 1709 weaponizes legitimate parental concerns by using them to hand over even more censorship and surveillance power to the government. Beneath its shiny “protect the children” rhetoric, this bill is misguided, unconstitutional, and deeply harmful to users of all ages.

A.B. 1709 Recklessly Violates Free Speech Rights

The First Amendment protects the right to speak and access information, regardless of age. But by imposing a blanket ban on social media access, A.B. 1709 would cut off lawful speech for millions of California teenagers, while also forcing all users (adults and kids alike) to verify their ages before speaking or accessing information on social media. This would immensely and unconstitutionally chill Californians’ exercise of their First Amendment rights.

These mandates ignore longstanding Supreme Court precedent, which protects young people’s speech and has consistently found such bans unconstitutional. Banning young people entirely from social media is an extreme measure that doesn’t match the actual risks of online engagement. California simply does not have a valid interest in overriding parents’ and young people’s rights to decide for themselves how to use social media.

After all, age-verification technology is far from perfect. A.B. 1709’s reliance on imperfect age-verification technology will disproportionately silence marginalized communities—those whose IDs don’t match their presentation, those with disabilities, trans and gender non-conforming folks, and people of color—who are most likely to be wrongfully denied access by discriminatory systems.  

Finally, many people will simply refuse to give up their anonymity in order to access social media. Our right to anonymity has been a cornerstone of free expression since the founding of this country, and a pillar of online safety since the dawn of the internet. This is for good reason: it allows creativity, innovation, and political thought to flourish, and is essential for those who risk retaliation for their speech or associations. A.B. 1709 threatens to destroy it.

A.B. 1709 Needlessly Jeopardizes Everyone’s Privacy

A.B. 1709’s age verification mandate also creates massive security risks by forcing users to hand over immutable biometric data and government IDs to third-party vendors. By creating centralized "honeypots" of sensitive information, the bill invites identity theft and permanent surveillance rather than actual safety. If we don’t trust tech companies with our private information now, we shouldn't pass a law that mandates we give them even more of it. 

We’ve already seen repeated data breaches involving age- and identity-verification services. Yet A.B. 1709 would require millions more Californians—including the youth this bill claims to protect—to feed their most sensitive data into this growing surveillance ecosystem. 

This is not the answer to online safety.

Take action

Tell Your Representative to OPPOSE A.B. 1709

A.B. 1709 Harms the Youth It Claims to Protect

While framed as a safety measure, this bill serves as a blunt instrument of censorship, severing vital lifelines for California’s young people. Besides being unconstitutional, banning young people from the internet is bad public policy. After all, social media sites are not just sources of entertainment; they provide crucial spaces for young people to explore their identities—whether by creating and sharing art, practicing religion, building community, or engaging in civic life. 

Social science indicates that moderate internet use is a net positive for teens’ development, and negative outcomes are usually due to either lack of access or excessive use. Social media provides essential spaces for civic engagement, identity exploration, and community building—particularly for LGBTQ+ and marginalized youth who may lack support in their physical environments. By replacing access to political news and health resources with state-mandated isolation, A.B. 1709 ignores the calls of young people themselves who favor digital literacy and education over restrictive government control.

Young people have been loud and clear that what they want is access and education—not censorship and control. They even drafted their own digital literacy education bill, A.B. 2071, which is currently before the California legislature! Instead of cutting off vital lifelines, we should support education measures that would arm them (and the adults in their lives) with the knowledge they need to explore online spaces safely.

A.B. 1709 Is Misguided and Won’t Work

In case you needed more reasons to oppose this bill.

  • A.B. 1709 Replaces Parenting With Government Control. Families know there is no one-size-fits-all solution to parenting. But A.B. 1709 imposes one anyway, overriding parental decision-making with a blanket prohibition. Parents who want to actively guide their children’s online experiences should be empowered, not relegated to the sidelines by a blunt state mandate.
  • A.B. 1709 Strengthens Big Tech Instead of Challenging It. Supporters claim that this bill will rein in the major tech companies, but in fact, steep fines and costly compliance regimes disproportionately harm smaller platforms. Where large corporations can afford to absorb legal risk and shell out for expensive verification systems, smaller forums and emerging platforms cannot. We’ve already seen platforms shut down or geoblock entire states in response to age-gating laws. And when the small platforms shutter, where do all of those users—and their valuable data—go? Straight back to the biggest companies.
  • A.B. 1709 Creates Expensive and Shady Bureaucracy During a Budget Crisis. California is facing a massive deficit, but A.B. 1709 would waste taxpayer dollars to fund a shadowy new "e-Safety Advisory Commission" to enforce this ban and dream up new ways to censor the internet. In addition, lawmakers in support of A.B. 1709 have already admitted that this bill is likely to follow the same path as other recent "child safety" laws that were struck down or blocked in court for First Amendment and privacy reasons. With A.B. 1709, taxpayers are being asked to hand over a blank check for millions in legal fees to defend a law that is unconstitutional on its face.

Californians: Act Now to Kill This Bill

A.B. 1709 is not an inevitability, as some supporters want you to believe. But we need to act now to support our youth and their right to participate in online public life.

Your representatives could vote on A.B. 1709 as soon as next week. If you’re a Californian, email your legislators now and tell them to vote NO on A.B. 1709.

Take action

Tell Your Representative to OPPOSE A.B. 1709

EFF Challenges Secrecy In Eastern District of Texas Patent Case 

EFF: Updates - Fri, 04/24/2026 - 6:57pm

Clinic students Emily Ko and Zoe Lee at the Technology Law and Policy Clinic at the NYU School of Law were the principal authors of this post.

Courts are not private forums for business disputes. They are public institutions, and their records belong to the public. But too often, courts forget that and allow for massive over-sealing, especially in patent cases. 

EFF recently discovered another case of this in the Eastern District of Texas, where key court filings about Wi-Fi technology used by billions of people every day were hidden entirely from public view. The public could not see the parties’ arguments about patent ownership, the plaintiff’s standing in court, or licensing obligations tied to standardized technologies.

EFF Seeks to Uncover Sealed Information in Wilus 

The case, Wilus Institute of Standards and Technology Inc. v. HP Inc., highlights a recurring transparency problem in patent litigation. 

Wilus claims to own standard essential patents (SEPs) related to Wi-Fi 6 — technology embedded in everyday devices. Wilus sued Samsung and HP for patent infringement. HP argued that Wilus failed to offer licenses on Fair, Reasonable, and Non-Discriminatory (FRAND) terms, which are required to prevent SEP holders from exploiting their position by blocking fair access to widely used technologies. 

In reviewing the docket, EFF found that many filings were improperly sealed under a lenient protective order without the required, specific justification needed in a proper motion to seal. Because there is a presumption of public access to court filings, litigants must file a motion to seal and demonstrate compelling reasons for secrecy. This typically requires a document-by-document and line-by-line justification. 

In the Eastern District of Texas, that standard is often not enforced. Instead, district judges allow litigants to hide information using boilerplate justification in a protective order without explaining why specific documents or specific parts in a document should be hidden. 

In Wilus, two sets of documents stood out. 

First, Samsung moved to dismiss the case, arguing Wilus may not have validly obtained the patents — raising doubts about whether it had standing to sue at all. Wilus’s opposition to that motion was filed completely under seal, with no redacted public version available. That briefing likely addresses the patent assignment agreements that underpin Wilus’s business model — information the public has an interest in, especially in cases involving non-practicing entities (NPEs) like Wilus. 

Second, filings related to HP’s supplemental briefing on FRAND obligations were also sealed in full, with no redacted versions available to the public. Whether Wilus is bound by FRAND has implications far beyond this case. Companies subject to FRAND must adhere to reasonable licensing terms, while those that are not can charge significantly higher licensing fees. 

In both instances, the public was shut out of arguments that bear directly on how essential technologies are licensed and controlled.

EFF Pushes For Public Access 

EFF raised these concerns with Wilus’s counsel and pressed for public access to the sealed records. Wilus ultimately agreed to file redacted versions of several documents, now available as Document Numbers 387, 388, and 389.

That result is progress, but it shouldn’t require outside intervention. Public versions of court filings should be the default, not something negotiated after outside pressure.

Even now, these newly filed redacted versions conceal significant portions of the parties’ arguments. The public still cannot fully see how this case about technologies that are used every day is being litigated. 

Why Public Access Matters 

Sealing court records is designed to be rare. To overcome the presumption of public access, litigants must show compelling reasons for secrecy. That’s because open courts are a distinguishing feature of American democracy. The public, journalists, and policymakers all have the right to observe proceedings and hold both government actors and private litigants accountable. 

Some filings do contain trade secrets or commercially sensitive information. But that doesn’t mean litigants should be able to hide information without explanation. The Eastern District of Texas allows litigants to bypass that requirement entirely.

EFF confronted this very same issue in its attempt to intervene in another Eastern District of Texas case, Entropic v. Charter. The same pattern appeared again in Wilus: instead of narrowly tailored redactions supported by specific reasoning, filings were withheld wholesale. 

Courts Must Enforce the Standard

Courts, not third parties, are responsible for protecting the public’s right of access. 

That means enforcing the “compelling reasons” standard as a matter of course. Parties seeking to seal sensitive information should be required to justify each proposed redaction. The Eastern District of Texas’ current approach falls short. By allowing broad, unsupported sealing through expansive protective orders, it effectively treats judicial records as confidential by default. 

Heavy caseloads don’t change the rule. Administrative burden cannot override constitutional and common law rights. Judicial records are presumptively public. Courts, including the Eastern District of Texas, should enforce that presumption. 

Other Federal Courts Get It Right 

The Eastern District of Texas is an outlier. In the Northern District of California, judges routinely reject overbroad sealing requests. As Judge Chhabria’s Civil Standing Order explains: 

[M]otions to seal . . . are almost always without merit. . . . Federal courts are paid for by the public, and the public has the right to inspect court records, subject only to narrow exceptions. 

The filing party must make a specific showing explaining why each document that it seeks to seal may justifiably be sealed . . . Generic and vague references to “competitive harm” are almost always insufficient justification for sealing. 

This approach reflects the law: sealing must be narrowly tailored and specifically justified.

Court Transparency is Fundamental 

At first glance, secrecy in patent litigation may not seem alarming. But it signals a broader erosion of transparency. The widespread use of expansive protective orders in the Eastern District of Texas is a practice that risks spreading if courts do not enforce the law. 

These practices allow private parties to obscure information about disputes involving technologies that shape modern life. That undermines a core principle of a free society: transparency regarding the actions of powerful actors. 

Courts are not private forums for business disputes. They are public institutions, and their records belong to the public. 

So long as these practices continue, EFF will keep advocating for transparency and working to vindicate the public’s right to access court records.

Friday Squid Blogging: How Squid Survived Extinction Events

Schneier on Security - Fri, 04/24/2026 - 5:03pm

Science news:

Scientists have finally cracked a long-standing mystery about squid and cuttlefish evolution by analyzing newly sequenced genomes alongside global datasets. The research reveals that these bizarre, intelligent creatures likely originated deep in the ocean over 100 million years ago, surviving mass extinction events by retreating into oxygen-rich deep-sea refuges. For millions of years, their evolution barely changed—until a dramatic post-extinction boom sparked rapid diversification as they moved into new shallow-water habitats. ...

California Coastal Community Must Reject CBP's AI-Powered Surveillance Tower

EFF: Updates - Fri, 04/24/2026 - 4:04pm

Customs and Border Protection (CBP) is seeking permission from the California city of San Clemente to install an Anduril Industries surveillance tower on a cliff that would allow for constant monitoring of entire coastal neighborhoods. 

The proposed tower is Anduril's Sentry, part of the Autonomous Surveillance Tower (AST) program. While CBP says it will primarily monitor the coastline for boats carrying migrants, it will actually be installed 1.5 miles inland, overlooking the bulk of the 62,000-resident city. By CBP's own public statement, the system, which combines video, radar, and computer vision, is "constantly scanning" for movement and identifying and tracking objects an AI algorithm decides are of interest. Depending on the model (the photos provided by CBP indicate it is a long-range maritime model), the camera could see as far as nine miles, which would cover the entire city and potentially see as far as neighboring Dana Point.

"The AST utilize advanced computer vision algorithms to autonomously detect, identify, and track items of interest (IoI) as they transit through the towers field of view," CBP writes in a privacy threshold analysis. "The system can determine if an IoI is a human, animal, or vehicle without operator intervention. The system then generates and transmits an alert to operators with the location and images of the IoI for adjudication and response." 

On April 28, local residents and Oakland Privacy, a privacy- and anti-surveillance-focused citizens’ coalition, are holding a town hall to inform the public about the dangers of this technology. We urge people to attend to better understand what's at stake. 

"The planned deployment of an Anduril tower along a heavily used Orange County coastline 75 miles from the border demonstrates that the militarization of the border region is rapidly moving northwards and across the entire state," writes Oakland Privacy. 

City officials raised concerns about resident privacy and proposed that a lease agreement include a prohibition on surveilling neighborhoods. CBP rejected that proposal, saying instead that it would configure the tower to "avoid" scanning residential neighborhoods, but the system would remain capable of tracking human beings in residential areas. According to the staff report: 

In response to privacy concerns, CBP has stated the system would be configured to avoid scanning residential areas that fall into the scan viewshed, focusing the system on the marine environment. CBP has maintained the purpose of the system is specifically maritime surveillance, and the system would be singularly focused on offshore activities. However, there may be an instance in which there is an active smuggling event, detected by the system at sea, in which the subsequent smuggling event traverses through the residential neighborhoods. In such a case, the system may continue to track and monitor. To restrict this functionality would be contrary to the spirit and intent of the deployment. Therefore, they cannot make such a contractual obligation.

The Anduril towers retain a variety of data, including images and more. 

The proposed Anduril surveillance tower. Source: City of San Clemente

"The AST capture and retain imagery which occurs in plan view of the tower sites and is stored as an individual event with a unique event identified allowing replay of the event for further investigation or dismissal based on activity occurring," according to the privacy threshold analysis.

The document indicates a potential 30-day retention period for imagery, but then contradicts itself to say that data will be held indefinitely to train algorithms: "AST will also be maintaining learning training data, these records should not be deleted." This means that taxpayers would be paying for the privilege of having their data turned into fuel for Anduril's product.

In 2020, CBP said it would work with the National Archives and Records Administration (NARA) to develop a retention schedule for training data (i.e., a timeline for deletion). However, when EFF filed a Freedom of Information Act (FOIA) request with NARA, the agency said there were no records of these discussions. Likewise, CBP has not provided records in response to the FOIA request EFF filed with the agency seeking the same records. 

Anduril Maritime Sentry in San Diego, where the border fence meets the ocean.

This would not be the first CBP tower placed along the coastline in California. EFF identified one in Del Mar, about 30 miles from the border, and another in San Diego County where the border fence meets the Pacific Ocean. CBP has also applied to place towers (although not necessarily the Anduril model) in or near several other coastal locations: Gaviota State Park, Refugio State Park, Vandenberg Air Force Base, Piedras Blancas, and Point Vicente. The California coastline isn’t the only coastline dotted with surveillance towers. The Migrant Rights Network has also documented numerous Anduril towers along the southeast coast of England. Where the San Clemente tower would differ is that there is a substantial population between the tower and the beach, and because it’s a 360-degree system, it can watch neighborhoods even further from the coast. 

However, this won't be the first time an Anduril tower has been placed next to a community. EFF has documented numerous Anduril towers in public parks along the Rio Grande in Laredo and Roma, Texas. In Mission, Texas, an Anduril tower was placed outside an RV park: the tower could not even see the border without capturing data from the community. Because AI can swivel the cameras 360 degrees, two churches were within the "viewshed" of that tower. 

Click here to view EFF's ongoing map of CBP surveillance towers.

Many border surveillance towers are placed on city or county property, requiring a lease to be approved by the local governing body, as is the case with San Clemente. In 2024, EFF and Imperial Valley Equity and Justice organized an effort to fight the renewal of the Border Patrol's lease for a tower next to a public park. The coalition lost narrowly after a recall election ousted two officials who were critical of the lease.

CBP is rapidly increasing the number of towers at the border and beyond, recently announcing the potential to install 1,500 more towers in the next few years (more than tripling what we've documented so far) at a cost of more than $400 million to the public for maintenance alone. This is despite more than 20 years of government reports that have documented how tower-based systems are ineffective and wasteful.

It's time to fight back. 

Hiding Bluetooth Trackers in Mail

Schneier on Security - Fri, 04/24/2026 - 7:01am

It was used to track a Dutch naval ship:

Dutch journalist Just Vervaart, working for regional media network Omroep Gelderland, followed the directions posted on the Dutch government website and mailed a postcard with a hidden tracker inside. Because of this, they were able to track the ship for about a day, watching it sail from Heraklion, Crete, before it turned towards Cyprus. While it only showed the location of that one vessel, knowing that it was part of a carrier strike group sailing in the Mediterranean could potentially put the entire fleet at risk...

The world is searching for oil. This summit is looking to get rid of it.

ClimateWire News - Fri, 04/24/2026 - 6:47am
The gathering in Colombia marks a breakaway effort to accelerate climate action after years of plodding progress under the United Nations.

US official: No hope for global carbon tax

ClimateWire News - Fri, 04/24/2026 - 6:46am
A U.S. negotiator warned other countries in a closed-door meeting that efforts to rein in climate pollution from ships have "no prospect" of success.

Judge weighs future of Massachusetts offshore wind farm

ClimateWire News - Fri, 04/24/2026 - 6:46am
The Trump admin wants to walk back a key permit for the project but asked the court for more time to analyze a separate order rebuking the government's anti-renewable policies.

NextEra sees boom in renewable orders as it plans gas build-out

ClimateWire News - Fri, 04/24/2026 - 6:45am
The energy giant is planning to build enormous amounts of power generation as data centers reshape the U.S. electric grid.

Cassidy attacks Letlow over trip to climate conference

ClimateWire News - Fri, 04/24/2026 - 6:45am
Republican Rep. Julia Letlow is running to unseat Sen. Bill Cassidy.

California Senate committee rejects climate liability, wildfire insurance proposals

ClimateWire News - Fri, 04/24/2026 - 6:44am
The state Senate Insurance Committee refused to advance two major bills that would have recouped rising property insurance costs from oil and gas companies and required insurance companies to cover homes in fire-prone areas under certain conditions.

Under fire in California governor’s race, Tom Steyer leans into climate

ClimateWire News - Fri, 04/24/2026 - 6:44am
The candidate is using his climate record to pitch himself as a progressive in a narrowing Democratic field.

Environmental groups raise concerns about amendments to California landfill rule

ClimateWire News - Fri, 04/24/2026 - 6:43am
State regulators passed the nation’s strictest rules for controlling methane leaks from landfills in November. Now, those may be changing.

How Microsoft spooked the global carbon removal market

ClimateWire News - Fri, 04/24/2026 - 6:43am
What the company decides to do with the projects matters because of its juggernaut status in bankrolling technologies for capturing carbon — meaning its withdrawal would send shock waves through the market.

Coffee giants will map world’s farms to fight deforestation

ClimateWire News - Fri, 04/24/2026 - 6:42am
The initiative will help the sector comply with an EU regulation designed to tackle the felling of trees associated with the bloc’s imports of key commodities.

New bridge helps cement African mountain kingdom as water lifeline

ClimateWire News - Fri, 04/24/2026 - 6:42am
The bridge is part of a network of constructions that will help the landlocked nation nearly double its water exports to South Africa to power one of Africa's biggest industrial and economic hubs.

Large coastal cities are losing sea–land breeze

Nature Climate Change - Fri, 04/24/2026 - 12:00am

Nature Climate Change, Published online: 24 April 2026; doi:10.1038/s41558-026-02619-8

High-resolution modelling incorporating sea surface temperature variability reveals that ocean warming has already reduced sea–land breeze days in most large coastal cities. Thus, ocean warming poses an overlooked threat to a natural climate regulator, with future emissions pathways determining whether the decline accelerates or slows.

EFF to 9th Circuit (Again): App Stores Shouldn’t Be Liable for Processing Payments for User Content

EFF: Updates - Thu, 04/23/2026 - 6:05pm

EFF filed an amicus brief for the second time in the U.S. Court of Appeals for the Ninth Circuit, arguing that allowing cases against the Apple, Google, and Facebook app stores to proceed could lead to greater censorship of users’ online speech.

Our brief argues that the app stores should not lose Section 230 immunity for hosting “social casino” apps just because they process payments for virtual chips within those apps. Otherwise, all platforms that facilitate financial transactions for online content—beyond app stores and the apps and games they distribute—would be forced to censor user content to mitigate their legal exposure.

Social casino apps are online games where users can buy virtual chips with real money but can’t ever cash out their winnings. The three cases against Apple, Google, and Facebook were brought by plaintiffs who spent large sums of money on virtual chips and even became addicted to these games. The plaintiffs argue that social casino apps violate various state gambling laws.

At issue on appeal is the part of Section 230 that provides immunity to online platforms when they are sued for harmful content created by others—in this case, the social casino apps that plaintiffs downloaded from the various app stores and the virtual chips they bought within the apps.

Section 230 is the foundational law that has, since 1996, created legal breathing room for internet intermediaries (and their users) to publish third-party content. Online speech is largely mediated by these private companies, allowing all of us to speak, access information, and engage in commerce online, without requiring that we have loads of money or technical skills.

The lower court hearing the case ruled that the companies do not have Section 230 immunity because they allow the social casino apps to use the platforms’ payment processing services for the in-app purchasing of virtual chips.

However, in our brief we urged the Ninth Circuit to reverse the district court and hold that Section 230 does apply to the app stores, even when they process payments for virtual chips within the social casino apps. The app stores would undeniably have Section 230 immunity if sued for simply hosting the allegedly illegal social casino apps in their respective stores. Congress made no distinction—and the court shouldn’t recognize one—between hosting third-party content and processing payments for the same third-party content. Both are editorial choices of the platforms that are protected by Section 230.

We also argued that a rule that exposes internet intermediaries to potential liability for facilitating a financial transaction related to unlawful user content would have huge implications beyond the app stores. All platforms that facilitate financial transactions for third-party content would be forced to censor any user speech that may in any way risk legal exposure for the platform. This would harm the open internet—the unique ability of anyone with an internet connection to communicate with others around the world cheaply, easily, and quickly.

The plaintiffs argue that the app stores could preserve their Section 230 immunity by simply refusing to process in-app purchases of virtual chips. But the plaintiffs’ position fails to recognize that other platforms don’t have such a choice. Etsy, for example, facilitates purchases of virtual art, while Patreon enables artists to be supported by memberships. Platforms like these would lose Section 230 immunity and be exposed to potential liability simply because they processed payments for user content that a plaintiff argues is illegal. That outcome would threaten the entire business models of these services, ultimately harming users’ ability to share and access online speech.

The app stores should be protected by Section 230—a law that protects Americans’ freedom of expression online by protecting the intermediaries we all rely on—irrespective of their role as payment processors.

Speaking Freely: Lizzie O'Shea

EFF: Updates - Thu, 04/23/2026 - 3:56pm

Lizzie O’Shea is an Australian lawyer, author, and the founder and chair of Digital Rights Watch, which advocates for freedom, fairness, and fundamental rights in the digital age. She sits on the board of Blueprint for Free Speech, and in 2019 was named a Human Rights Hero by Access Now.

Interviewer: Jillian York

Jillian York: Hi, good morning, or rather, good evening for you.

Lizzie O’Shea: Hi Jillian, it's great to be here. 

JY: I'm going to start with asking a question that I try to kick off every interview with, which is, what does free speech or free expression mean to you?

LO: Yes, so Digital Rights Watch, which is the organization I founded and I chair, is focused on fundamental rights and freedoms in the online world. And so freedom of speech is obviously a big part of that. It's obviously a very vexed right, partly because of its heritage and interpretation in places like the United States, which sometimes sits in contrast culturally to other parts of the world. Certainly, if you ask Australians about it, they do not want to have a culture of free speech that looks like the United States. 

Australians understand that freedom of expression is a really important component of democracy. So one of my jobs is to make the claim that curtailing freedom of speech, including in online settings, can have a real impact on democracy. And I think that's fundamentally true, and you don't want to wait until it's too late to be able to make that argument, to ensure that the policies are in place to protect that freedom. So I think it's a really important freedom. It's got a vexed history and expression in the modern online world, but many people still instinctively understand that those in power see speech as something that is important to challenging their authority, and so it can be a really important place to fight back and protect democracy and other rights from being impacted by those who hold power at the moment.

JY: I want to ask you about your book. You're a critic of techno-utopianism. Your book, Future Histories, came out right before the pandemic, if I recall, and it looks to the past for lessons for our technological and cultural future. I really appreciated your take on Elon Musk. So I guess what I want to ask you about is two things. What, in your view, has changed since you wrote it?

LO: Yeah, that's a really interesting question. I must admit, I was thinking the other day about whether some of what I wrote really holds up. And I think the fundamentals are still true, in the sense that I still believe that a lot of the discussions and debates we have about technology today are presented as fundamentally novel when they are very old, ancient discussions and debates about how power should be distributed through society, and how technology enables that kind of power distribution or works against it, right? So I feel like that fundamental analysis, whatever its contribution to the field, is still valid, of course. In some ways though, those technical systems have become more opaque, like the artificial intelligence industry and how that's been built off the back of years of exploitation of personal information and centralization of power in technology companies. Those things have become more powerful and concentrated and difficult to understand—if you're not deep in the weeds—beyond an instinctive understanding that something's going a bit wrong, perhaps. 

So in some ways those trends have exacerbated things, and I think many other contributors, yourself included, have brought a really important set of analyses to these discussions. More generally, though, one of my fundamental understandings of how I frame some of these arguments is that there are two sources of power, right? Government power and corporate power that really shape how the online world is developing. And post-pandemic, there's a lot greater skepticism, criticism, and outright distrust of government authorities seeking to do work to protect people from some of those corporate excesses. Now that's obviously something that is much more part of American culture as opposed to European culture, and in Australia, we sit somewhere in between. But that skepticism and that mistrust of institutions, I don't know that that serves us well. I'm somebody who does treat with criticism policies put forward by government, because I think it's our job as civil society people, as people part of a social movement that want to have rights at the center of our society, to be critical of those in power and make sure that they're being held accountable. But that mistrust has fundamentally shifted how possible it is to do that in an effective way. And I think that poses real challenges for people who want to see government policy look different to how it is and how you can bring people into a sense of trust, investing in a democratic rights-based society, rather than rejection and cynicism being the overriding kind of factor in how they shape their political arguments. Which is a real challenge, I think, for people like us who rely on some of that mistrust and skepticism in order to fuel the fire of some of these campaigns, but do want to see people still invested in democratic processes.

JY: Yeah, absolutely. So speaking of policies, you're in Australia, where the government's enacted some of the strictest social media laws for minors in the world, I would say. In one of our most recent interviews, which was with Jacob Mchangama, we talked about how the comparison of social media to Big Tobacco is spreading, and this idea that there's no utility in social media for minors, that it's a net harm. I'm curious what your thoughts are on that, and then we can dive into the more nitty gritty bits of the Australian law.

LO: I think that's a great place to start, because the overwhelming sense in how this policy was presented to the public in Australia is that this is a very dangerous place for young people to be, and that desperate times call for desperate measures. “We don't have time to fix these spaces. We need to just restrict access.” It's described as a delay. Many, including me, describe it as a ban for under-16-year-olds. So what has been very interesting in this discussion is who's been left out of the conversation. And if you talk to young people—and there are many organizations working with young people—and you talk to them about what they use social media for, they often say that they wish adults understood that they used it for different reasons, or they're scared about different things than what adults think they might be scared of. And that kind of fundamental failure of communication is, I suppose, not a surprise, when these people don't actually have the power to vote, or the power to do things a normal legal person would do.

But when you're making policy about these people, that can be quite impactful, it can have very detrimental impacts. And if you take a human rights approach, that is your job to think about the negative impact on human rights, and what you're going to do about it, it's not really good enough. And this has been an experiment that Australia has led on, very much, looking for headlines, for a perception of boldness. Some of that claim is legitimate in the sense that they want to be seen to be taking action, and a lot of people feel very concerned that governments aren't prepared to take action against big tech companies. So, some of that is a valid feeling. But I think in this context, we lose so much when we don't actually listen to the people affected, and listen to the myriad ways in which they use social media. Some things they're concerned about, some things they find harmful, some things they're really sick of. But there's so many ways in which they use it to find a sense of community, to find a sense of empowerment, to talk to people they would never otherwise be able to access, sometimes because they're isolated, socially, geographically, whatever it may be, and it's so disappointing to me that that kind of part of the conversation was not had as we debated this particular policy.

JY:  So, what do you think some of the harms are for youth who can't access social media? What are young people losing out on? Who is harmed by these laws?

LO: It's a great question. When we do a human rights analysis, we have to think about who's harmed by a particular policy. Even if we think it's justified overall on utilitarian grounds, say it's better for everyone overall, who's harmed is a really important question, and so much of that has been absent from this discussion. So it's not just me. Hundreds and hundreds of experts in Australia, and organizations that represent many, many people, have provided commentary and input into this process and expressed many concerns about this policy, and there are a few different ways in which people are harmed. 

So the first thing, of course, is that if you require that age verification occur, you're engaging in a privacy violation for many people, and there are cybersecurity risks with collecting that kind of information. There are deterrent effects and the like. Now that may not concern you, or you may think that's a justifiable kind of infringement on privacy rights, but I think that's worth mentioning. It is quite significant, especially in a world in which age verification doesn't tend to work very well on any measure. There are very serious cybersecurity risks that have been associated with age verification processes and the like. So it's certainly not nothing. The other set of people that are harmed are particularly vulnerable people. 

There's a variety of people who are still accessing social media. So on the early data, it looks like about seven in ten young people who had social media accounts are still accessing social media now. Now these are early figures, so there's a lot to be said for looking at how this works in a year's time, for example. But I think one of the interesting things to think about is when those people, young people, who are still on social media—in breach of this ban or in defiance of this ban, however you want to put it—might need to engage in help-seeking behavior, there may be a deterrent there, because they know that the law is they're not supposed to be accessing social media. So that is a selection of young people that we're particularly concerned about. And then, more generally, of course, there's a whole cohort of people who are particularly vulnerable. Maybe they're LGBTIQ, maybe they're in an isolated geographic area, far away from a city. Maybe they're experiencing harm at home and have no one to talk to about it. There's all sorts of ways in which young people use social media to manage their own challenges, harms, difficulties, and very effectively. They find people to talk to about their problems when other people may not be available to them. And that is an issue that is hard to map, right? We know that there's been an increase in calls to things like Kids Helpline, which does what it says on the tin. So those kinds of things have seen an increase. But I think that is something that is harder to map, but still very, very important, and may result in people going to other parts of the internet as well to seek help in different ways that might also not be very safe for them. 

More generally it's worth remembering that if platforms can say with some confidence, from a policy perspective, that young people are no longer on their platform, there is less incentive to design for them as well, which is another associated problem. Now, it remains unclear as to how platforms are dealing with that issue, especially in light of the most recent data, which suggests that a lot of young people remain on the platforms. But that's an issue. Do we then allow platforms to no longer design in a way that respects the autonomy of young people, the safety of them, their security and the like, because they have special needs and interests and all those sorts of things. So that's another problem. There's lots of operational problems. There's lots of conceptual ones. I don't think many of these have been considered or accounted for in the process.

JY: Absolutely, those are the same things that worry me as well. Okay, let's talk about the campaign. So what has the pushback to this, to the law, looked like, and what changes were you calling for?

LO: Well, if I can Jillian, what I might start with is where the push came from. Because I think that's quite instructive. One of the key sets of institutions that were pushing for this ban were mainstream news organizations, and we're learning a bit more about this over time, but the Murdoch press and other large news organizations in Australia—Australia has one of the most concentrated media environments in the world—were pushing for this ban. There was a petition run on one of their websites that was gathering tens of thousands of signatures. There were also others. Then there was a lot of advocacy towards specific kinds of political leaders in the country, and then a kind of competitive race to see who could be the most extreme in terms of putting forward a policy. But it's certainly the case that this very powerful set of actors in our democracy, at least, were a key driver of this campaign for a social media ban for young people. Now, I think there's a sense of moralism about it, a sense of desperation about it, tapping into genuine fears from parents, you know, and the like. And you know, The Anxious Generation, the book by Jonathan Haidt, has obviously been very influential with many people, but the research is still a bit unclear, right? About what this all means. And lots and lots of researchers will tell you that that book isn't making a reasonable argument based on the data that we have, right? So, it's a very febrile environment for this kind of discussion, and those kinds of institutional actors were incredibly important in getting this on the political agenda.

We then had an electoral campaign, definitely a vision that conservative politics would push for this. So Labor politics, you know, center-left politics pushed for it, and won the election, right? Not on this issue alone, but it was in that environment in which this policy was developed. There was a very small amount of time for submissions, for policy discussion about it. Initially, the government had said they weren't going to do it because they were concerned that the age verification technology wasn't up to scratch. That changed very, very quickly, and then the policy was introduced. I think it was in six days, some very small amount of time. So many different child rights organizations, academics, institutions, filed policy submissions to discuss this, did a lot of advocacy work, but the passage of time between the announcement of the proposal and the passage of the legislation was extremely short, and what followed has been a year of discussion around whether this was a good thing, a year of testing age verification technology, often finding it wanting, but setting up a set of preferred providers that platforms could use in order to satisfy the legislative requirements. A lot of lobbying from platforms as to whether they're in or out. There was a big discussion about whether YouTube should be in or out. And a lot of backroom dealing between relevant politicians and big tech companies. So the whole thing is very unseemly, and we're now in the world where it's been introduced, with a lot of failure for it to actually operationalize. Now, it may be that that changes over time, but that's quite telling, right? 

It's telling also because I don't think all parents particularly like this proposal either. It's very popular, but there's certainly a section of parents that are facilitating their children's continued access to social media. And I think that's interesting in itself. Part of what it is—something we were talking about actually earlier in our conversation—people don't like governments telling them how to parent their children. That has taken some very negative expressions in parts of the world, you know, resistance to things like the availability of medicine and treatment for kids who might be trans. But in this context, it's like, “I'm not going to let the government tell me that I can't let my kid on social media.” So, I don't think it's clarified much in the debate in terms of understanding how platforms behave towards young people, what they could do better, of which there's many things, and then how we get to the world in which children are able to be online but better protected. I'm not sure this proposal has contributed to that. It's really muddied the waters about what the government is capable of doing, what it should be doing, and what platforms, you know, what should be the process that platforms go through when thinking about designing for children.

JY: That's such a great answer. Thank you. And actually, that brings me to another question, which is: in your ideal world, taking this law, being able to throw it out the window if you want… What would you want to see, not just from social media, but from the platforms, from governments, both for the sake of youth, but also, you know, for all of us?

LO: I think that is the exact right question to be asking, and it's a good time that we've managed to talk now, because actually, in the interim, what's come out is the first draft that we've got of a Children's Online Privacy Code. And to me, that is really revealing, because it is designed to apply to all services that might be accessed by children, like all online services, and it has a really kind of sophisticated understanding of what consent might look like, where you need help with getting consent, when it comes to parents or adults that are supportive in your life. And then at different ages that might look a bit different, like you might get notified if consent has been refused by your caregiver, for example, if you've wanted to do something. So there's a more sophisticated understanding of what consent looks like, and a range of different restrictions on when personal information can be collected and used.

It's got things in it that I don't particularly like. I would like to see a prohibition on the commercial exploitation of children's personal information, because I don't think any targeted advertising is justified, for example. And I think that kind of measure of that commercial exploitation is hugely problematic. I think we have to think about what deletion looks like. I think you should have a right to deletion, for example. But you know, we also have to respect that children grow into young adults, that making decisions at 16 might look quite different to when they're three. So what you do with their personal information, how they carry that forward into their adult lives might be different depending on the age and so that kind of privacy reform actually is the fundamental thing. I’m sure your listeners don’t need reminding of this.

That is my favorite right. Because I think restricting access to personal information is a rights-respecting way to improve the online environment for everybody. And what I think is really interesting about this Children's Online Privacy Code that is still in draft form, is that all these things should be available to adults as well. Like adults in Australia don't have the right to deletion at the moment. We don't have a right to comprehensively know where our information has traveled and to delete it. You know, look, we have fewer rights than Californians, for example, certainly fewer rights than Europeans. What this code has highlighted is that, in fact, all people should be enjoying this kind of protection that comes from restricting access and use of personal information and giving people more control over that, because that personal information is the raw material of the business model, and it leads to a very loose approach to its collection and leads to many negative downstream consequences, I would argue, including business models that prioritize engagement, that prioritize and monetize polarizing, extremist content, mis- and disinformation.

I think we could have a real crack at trying to ameliorate some of these problems, or certainly reduce their impact, if we started with that fundamental raw material that fuels the business model. So that, I think, is a really telling alternative that we're now considering as a society, and I like to think that people will come to an understanding that you can find ways to improve the online world, particularly for young people, without restricting their access to that online world, in a way that is empowering for them, rather than patronizing or infantilizing. 

JY: I completely agree, and I think it's funny that people often see privacy and expression at odds with each other, when actually I think privacy enhances expression.

LO: I think it makes spaces safer, makes people freer to be able to say what they think, but also to have those discussions in ways that are more meaningful, that can help find connections, even across divisions, rather than exploiting that division for profit, which is so much of the current business model.

JY: Are there any other things happening in Australia that EFF’s readers should know about?

LO: Well, we're about to go through the second tranche of our privacy reform. So we did engage in our first tranche of privacy reform. We have a Privacy Act that was passed in 1988 and hasn't been meaningfully updated in the decades since. So we got a few small changes, which included the enabling provision to allow a Children's Online Privacy Code to be developed, which is why we're getting the benefit of that now. But we're about to see a range of different privacy laws introduced. What the content is, of course, will be the subject of a lot of discussion and debate. We're going to argue for the right to deletion, the right to a private right of action for privacy harms, better processes for consent, and improved definitions of personal information to really bring Australia in line with lots of other similar jurisdictions around the world. And we're really keen to advance that for all the reasons that I just mentioned. 

The other big change that I think is coming is that, you know, which is perhaps more on topic for this conversation, is that we've had this online safety policy that is constantly being touted as the first in the world, and world leading and this and that, and it's really been a very flawed and vexed process working out how we could develop codes that were designed to govern how certain services were provided in the digital age, in line with safety expectations. There’s been a lot of focus on complaints and take down notices and things like that, there's obviously been that vexed litigation with Elon Musk, trying to get him to take down a particular video, and ultimately, the failure of our regulators to succeed on that front, I think, probably correctly, because giving a regulator in Australia the right to take down content from anywhere in the world seems to me a very concerning development, if that was allowed to proceed. So this history of online safety, it's been a big part of successive Australian governments’ identities. We're about to see the introduction of a digital duty of care. So that's certainly the stated position of government. What that looks like in practice, I think will be really interesting. 

I like the idea of a digital duty of care. I like the idea of a flexible, overarching concept. What the content is, though, will be really important. So what I would like to see is proactive disclosure of harm or risk of harm, and then actions taken by platforms to do it. So more onus on platforms to provide transparency about what they know about how their online spaces are being used and what might be harmful. I mean, there's a question around whether we'll see an introduction of a civil right, something similar following from the litigation that’s taken place in California and New Mexico, and that is going to be leading, really, multiple claims that are being made all around the country in the US, against companies like Meta and Google and other social media platforms. So I think there may be a flow-on effect from that, as in, it might turn into a civil right to sue for failure to meet the requirements of digital duty of care. But I'm really interested to hear from any of your listeners, or anyone who's working in this space about what the content should be of that digital duty of care, because there's obviously limits as well. Like it can be not rights-respecting, and we're interested in making sure that's not the case. And I think there's probably a range in which it could be more protective or less and working out how to do that—there are examples from around the world, but that's going to be something I reckon we could use help with that we want to get right and make use of that opportunity as best we can. 

The last thing I'll say, I suppose, is that our government is always looking for ways to deal with mis- and disinformation, and that comes with real risks of censorship. And so, I think there's a strong argument to focus on privacy reform, because it's a rights-respecting reform as an antidote to mis- and disinformation. Greater transparency on platforms—I think about how they prioritize content in your feed, for example, can be useful, or reporting on what content is really popular, like ad libraries. There's all sorts of ways in which we can introduce greater transparency, but I do worry that as governments around the world feel emboldened to do so, they might look for more ways to remove content, to be more involved in content moderation policies that have the real potential to become censorship if we're not careful. So that's the other abiding concern I've got about Australian policy at the moment.

JY: One of my big concerns now, too, is all of these authoritarian governments watching Australia, watching the UK, and enacting laws that are modeled on, but much more severe than, the ones in those places. Do you share that concern? 

LO: Yeah. I mean, the other way in which it's come about in Australia, certainly, is anti-doxxing laws, which, at the moment, we've got laws on our books that came about attached to a privacy reform. I'm hesitant to say it's a privacy reform, because it's not, but it's very egregious. It's a criminal offense to disclose basic details about someone online, if it's done with a set of intents and the like, about their particular status as a group, and that, I think, you could drive a truck through in terms of how you could interpret it, right? There's such a wide variance, and bringing a proceeding against someone, like prosecuting them for that, is such a life-altering experience. And I think if governments did want to focus on particular activists. And I'm particularly thinking of, you know, the way it was framed was certainly around the discussion and debate about the genocide unfolding in Gaza. Like, I think, particularly about that movement, they're very vulnerable to crackdowns by government for speech that is perceived to be unacceptable by government. 

And I'm not even trying to debate it. I think there's certainly antisemitic commentary occurring in Australia, and indeed, there have been some people, like genuine Nazis, arrested, which, you know, is a different kettle of fish. But I think progressive movements, not just the defense of Palestine movement, but lots of other progressive movements, are at particular risk from those kinds of laws. But I think mis- and disinformation is the other vehicle. So we have to be very careful about giving platforms, giving regulators, both the mandate and then the authority to police content based on particular criteria. And often what they talk about, or what they talked about in proposals that have now died in Australia, were things like public health issues. So, you know, that's a particular concern that drives a lot of people who are very concerned about the years of Covid up the wall. So it inspires a lot of reaction to it. But I think there's lots of ways in which undermining political stability is put forward as a justification for removing content. That's just so broad that I think you could really start to see censorship. It's just not good enough. I just don't think we can tolerate those kinds of proposals. I like to think that's not the case in Australia, but I just think there's a tendency among governments now to see this as an opportunity. It's an anxiety lots of people have about mis- and disinformation, and so they draw on that as a mandate to act. And I think we should be very cautious about those proposals.

JY: Definitely. Okay, I’m going to ask the final question that I ask everyone. Who is your free speech or free expression hero? Or someone from history, or even someone personal who has influenced you?

LO: There’s a chapter in my book where I talk about the Paris Commune, which happened a long time ago, but I still think it’s a really interesting experiment in applied democracy. This is when a bunch of Communards took over Paris and started doing things differently in a variety of different ways. Gustave Courbet is this artist who was leading the artist collective during this time, and I always found him entertaining because he would paint things that weren’t expected. So, often, nudes that were considered quite scandalous because they were everyday women who weren’t angelic or Madonna-esque in their style, but he’s got a very famous painting of female genitalia—

JY: Yes! Facebook took it down! [laughs]

LO: Exactly. It’s always been a very confrontational image. People find it sexist sometimes, because they think it’s very pornographic. I understood it differently. It’s called “The Origin of the World,” so I sort of see it as a force of giving life. Interpret it however you like, the point is that Facebook couldn’t tolerate it and took it down. There’s a nice little bit of litigation where a schoolteacher had a page where he was teaching people about that art, and Facebook could just not tolerate this art. In my mind, it was so telling that a Communard from long before was basically revealing, as an expert troll almost, how conservatives—someone like Mark Zuckerberg—view these things, and how he shapes these platforms. And how they subtly reshape what we think is appropriate, what we think is free, what we think is within the realms of good society. And that you really do need artists telling you that that might not be true, and they’re some of the most effective actors at revealing that about those who hold power, like reshaping our understanding of what acceptable debate is, and how we can show power to be exercised in our online world, where in other circumstances it might be quite okay.

I love that story, and I love the Communards. There’s a lot of beautiful writing about them; there’s a beautiful book called Communal Luxury where they talk about all the different ways in which they were trying to reimagine their society and do it collectively, from things like having the first union of women to having the design of clothes and furniture look different. I want to see a world in which people take that power in both the micro and macro and start to reshape their society in really creative ways. And I feel like digital technology has the real capability of allowing that to occur, and I want to revive that sense of concrete democracy rather than just delegated democracy or deferred representative democracy, where you tell someone else what you want but don’t have a say in a lot of decisions. And so, that really grassroots idea of democracy is something I think we’re in a world in which could really occur with the assistance of digital technology. It’s a matter of working out how to bring it into being. And that’s what I see this movement as doing. People with digital rights as their primary concern are trying to recreate that world so that there are more communal, collective spaces for discussing what the future should look like.
