Feed aggregator
Israel Hacked Traffic Cameras in Iran
Multiple news outlets are reporting on Israel’s hacking of Iranian traffic cameras and how they assisted with the killing of that country’s leadership.
The New York Times has an <a href="https://www.nytimes.com/2026/03/01/us/politics/cia-israel-ayatollah-compound.html"<article on the intelligence operation more generally.
The Government Uses Targeted Advertising to Track Your Location. Here's What We Need to Do.
We've all had the unsettling experience of seeing an ad online that reveals just how much advertisers know about our lives. You're right to be disturbed. Those very same online ad systems have been used by the government to warrantlessly track peoples' locations, new reporting has confirmed.
For years, the internet advertising industry has been sucking up our data, including our location data, to serve us "more relevant ads." At the same time, we know that federal law enforcement agencies have been buying up our location data from shady data brokers that most people have never heard of.
Now, a new report gives us direct evidence that Customs and Border Protection (CBP) has used location data taken from the internet advertising ecosystem to track phones. In a document uncovered by 404 Media, CBP admits what we’ve been saying for years: The technical systems powering creepy targeted ads also allow federal agencies to track your location.
The document acknowledges that a program by the agency to use "commercially available marketing location data" for surveillance drew from the process used to select the targeted ads shown to you on nearly every website and app you visit. In this blog post, we'll tell you what this process is, how it can and is being used for state surveillance, and what can be done about it—by individuals, by lawmakers, and by the tech companies that enable these abuses.
Advertising Surveillance Enables Government SurveillanceThe online advertising industry has built a massive surveillance machine, and the government can co-opt it to spy on us.
In the absence of strong privacy laws, surveillance-based advertising has become the norm online. Companies track our online and offline activity, then share it with ad tech companies and data brokers to help target ads. Law enforcement agencies take advantage of this advertising system to buy information about us that they would normally need a warrant for, like location data. They rely on the multi-billion-dollar data broker industry to buy location data harvested from people’s smartphones.
We’ve known for years that location data brokers are one part of federal law enforcement's massive surveillance arsenal, including immigration enforcement agencies like CBP and Immigration and Customs Enforcement (ICE). ICE, CBP and the FBI have purchased location data from the data broker Venntell and used it to identify immigrants who were later arrested. Last year, ICE purchased a spy tool called Webloc that gathers the locations of millions of phones and makes it easy to search for phones within specific geographic areas over a period of time. Webloc also allows them to filter location data by the unique advertising IDs that Apple and Google assign to our phones.
But a document recently obtained by 404 Media is the first time CBP has acknowledged the location data it buys is partially sourced from the system powering nearly every ad you see online: real-time bidding (RTB). As CBP puts it, “RTB-sourced location data is recorded when an advertisement is served.”
Even though this document is about a 2019-2021 pilot use of this data, CBP and other federal agencies have continued to purchase and use commercially obtained location data. ICE has purchased location tracking tools since then and recently requested information on “Ad Tech” tools it could use for investigations.
The CBP document acknowledges two sources of location data that it relies on: software development kits (SDKs) and RTB, both methods of location-tracking that EFF has written about before. Apps for weather, navigation, dating, fitness, and “family safety” often request location permissions to enable key features. But once an app has access to your location, it could share it with data brokers directly through SDKs or indirectly (and often without the app developers' knowledge) through RTB. Data brokers can collect location data from SDKs that they pay developers to put in their apps. When relying on RTB, data brokers don’t need any direct relationship with the apps and websites they’re collecting location data from. RTB is facilitated by ad companies that are already plugged into most websites and apps.
How Real-Time Bidding WorksRTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your information, including location data, to thousands of companies a day. At a high-level, here’s how RTB works:
- The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you.
- This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers.
- The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people.
- Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space.
- The highest bidder gets to display an ad for you, but advertisers (or the adtech companies that represent them) can collect your bidstream data regardless of whether or not they bid on the ad space.
A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. For example, the FTC found that location data broker Mobilewalla collected data on over a billion people, with an estimated 60% sourced from RTB auctions. Leaked data from another location data broker, Gravy Analytics, referenced thousands of apps, including Microsoft apps, Candy Crush, Tinder, Grindr, MyFitnessPal, pregnancy trackers and religious-focused apps. When confronted, several of these apps’ developers said they had never heard of Gravy Analytics.
As Venntel, one of the location data brokers that has sold to ICE, puts it, “Commercially available bidstream data from the advertising ecosystem has long been one of the most comprehensive sources of real-time location and device data available.” But the privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast the average person’s data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately exploited. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used.
What You Can Do To Protect YourselfRevelations about the government's exploitation of this location data shows how dangerous online tracking has become, but we’re not powerless. Here are two basic steps you can take to better protect your location data:
- Disable your mobile advertising ID (see instructions for iPhone/Android). Apple and Google assign unique advertising IDs to each of their phones. Location data brokers use these advertising IDs to stitch together the information they collect about you from different apps.
- Review apps you’ve granted location permissions to. Apps that have access to your location could share it with other companies, so make sure you’re only granting location permission to apps that really need it in order to function. If you can’t disable location access completely for an app, limit it to only when you have the app open or only approximate location instead of precise location.
For more tips, check out EFF’s guide to protecting yourself from mobile-device based location tracking. Keep in mind that the security plan that’s best for you will vary in different situations. For example, you may want to take stronger steps to protect your location data when traveling to a sensitive location, like a protest.
What Tech Companies and Lawmakers Must DoLegislators and tech companies must act so that individuals don’t bear the burden of defending their data every time they use the internet.
Ad tech companies must reckon with their role in warrantless government surveillance, among other privacy harms. The systems they built for targeted advertising are actively used to track people’s location. The best way to prevent online ads from fueling surveillance is to stop targeting ads based on detailed behavioral profiles. Ads can still be targeted contextually—based on the content people are viewing—without collecting or exposing their sensitive personal information. Short of moving to contextual advertising, tech companies can limit the use of their systems for government location tracking by:
- Stopping the use of precise location data for targeted advertising. Ad tech companies facilitating ad auctions can and should remove precise location data from bid requests. Ads can be targeted based on people’s coarse location, like the city they’re in, without giving data brokers people’s exact GPS coordinates. Precise location data can reveal where we work, where we live, who we meet, where we protest, where we worship, and more. Broadcasting it to thousands of companies a day through RTB is dangerous.
- Removing advertising IDs from devices, or at minimum, disabling them by default. Advertising IDs have become a linchpin of the data broker economy and are actively used by law enforcement to track people’s location. Advertising IDs were added to phones in 2012 to let companies track you, and removing them is not a far-fetched idea. When Apple forced apps to request access to people’s advertising IDs starting in 2021 (if you have an iPhone you’ve probably seen the "Ask App Not to Track" pop-ups), 96% of U.S. users opted out, essentially disabling advertising IDs on most iOS devices. One study found that iPhone users were less likely to be victims of financial fraud after Apple implemented this change. Google should follow Apple’s lead and disable advertising IDs by default.
Lawmakers also need to step up to protect their constituents' privacy. We need strong, federal privacy laws to stop companies from spying on us and selling our personal information. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, as it creates a financial incentive for companies to track our every move.
Legislators can and must also close the "data broker loophole" on the Fourth Amendment. Instead of obtaining a warrant signed by a judge, law enforcement agencies can just buy location data from private brokers to find out where you've been. Last year, Montana became the first state in the U.S. to pass a law blocking the government from buying sensitive data it would otherwise need a warrant to obtain. And in 2024, Senator Ron Wyden's EFF-endorsed Fourth Amendment is Not for Sale Act passed the House before dying in the Senate. Others should follow suit to stop this end-run around constitutional protections.
Online behavioral advertising isn’t just creepy–it’s dangerous. It's wrong that our personal information is being silently harvested, bought by shadow-y data brokers, and sold to anyone who wants to invade our privacy. This latest revelation of warrantless government surveillance should serve as a frightening wakeup call of how dangerous online behavioral advertising has become.
The Government Uses Targeted Advertising to Track Your Location. Here's What We Need to Do.
We've all had the unsettling experience of seeing an ad online that reveals just how much advertisers know about our lives. You're right to be disturbed. Those very same online ad systems have been used by the government to warrantlessly track peoples' locations, new reporting has confirmed.
For years, the internet advertising industry has been sucking up our data, including our location data, to serve us "more relevant ads." At the same time, we know that federal law enforcement agencies have been buying up our location data from shady data brokers that most people have never heard of.
Now, a new report gives us direct evidence that Customs and Border Protection (CBP) has used location data taken from the internet advertising ecosystem to track phones. In a document uncovered by 404 Media, CBP admits what we’ve been saying for years: The technical systems powering creepy targeted ads also allow federal agencies to track your location.
The document acknowledges that a program by the agency to use "commercially available marketing location data" for surveillance drew from the process used to select the targeted ads shown to you on nearly every website and app you visit. In this blog post, we'll tell you what this process is, how it can and is being used for state surveillance, and what can be done about it—by individuals, by lawmakers, and by the tech companies that enable these abuses.
Advertising Surveillance Enables Government SurveillanceThe online advertising industry has built a massive surveillance machine, and the government can co-opt it to spy on us.
In the absence of strong privacy laws, surveillance-based advertising has become the norm online. Companies track our online and offline activity, then share it with ad tech companies and data brokers to help target ads. Law enforcement agencies take advantage of this advertising system to buy information about us that they would normally need a warrant for, like location data. They rely on the multi-billion-dollar data broker industry to buy location data harvested from people’s smartphones.
We’ve known for years that location data brokers are one part of federal law enforcement's massive surveillance arsenal, including immigration enforcement agencies like CBP and Immigration and Customs Enforcement (ICE). ICE, CBP and the FBI have purchased location data from the data broker Venntell and used it to identify immigrants who were later arrested. Last year, ICE purchased a spy tool called Webloc that gathers the locations of millions of phones and makes it easy to search for phones within specific geographic areas over a period of time. Webloc also allows them to filter location data by the unique advertising IDs that Apple and Google assign to our phones.
But a document recently obtained by 404 Media is the first time CBP has acknowledged the location data it buys is partially sourced from the system powering nearly every ad you see online: real-time bidding (RTB). As CBP puts it, “RTB-sourced location data is recorded when an advertisement is served.”
Even though this document is about a 2019-2021 pilot use of this data, CBP and other federal agencies have continued to purchase and use commercially obtained location data. ICE has purchased location tracking tools since then and recently requested information on “Ad Tech” tools it could use for investigations.
The CBP document acknowledges two sources of location data that it relies on: software development kits (SDKs) and RTB, both methods of location-tracking that EFF has written about before. Apps for weather, navigation, dating, fitness, and “family safety” often request location permissions to enable key features. But once an app has access to your location, it could share it with data brokers directly through SDKs or indirectly (and often without the app developers' knowledge) through RTB. Data brokers can collect location data from SDKs that they pay developers to put in their apps. When relying on RTB, data brokers don’t need any direct relationship with the apps and websites they’re collecting location data from. RTB is facilitated by ad companies that are already plugged into most websites and apps.
How Real-Time Bidding WorksRTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your information, including location data, to thousands of companies a day. At a high-level, here’s how RTB works:
- The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you.
- This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers.
- The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people.
- Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space.
- The highest bidder gets to display an ad for you, but advertisers (or the adtech companies that represent them) can collect your bidstream data regardless of whether or not they bid on the ad space.
A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. For example, the FTC found that location data broker Mobilewalla collected data on over a billion people, with an estimated 60% sourced from RTB auctions. Leaked data from another location data broker, Gravy Analytics, referenced thousands of apps, including Microsoft apps, Candy Crush, Tinder, Grindr, MyFitnessPal, pregnancy trackers and religious-focused apps. When confronted, several of these apps’ developers said they had never heard of Gravy Analytics.
As Venntel, one of the location data brokers that has sold to ICE, puts it, “Commercially available bidstream data from the advertising ecosystem has long been one of the most comprehensive sources of real-time location and device data available.” But the privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast the average person’s data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately exploited. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used.
What You Can Do To Protect YourselfRevelations about the government's exploitation of this location data shows how dangerous online tracking has become, but we’re not powerless. Here are two basic steps you can take to better protect your location data:
- Disable your mobile advertising ID (see instructions for iPhone/Android). Apple and Google assign unique Advertising IDs to each of their phones. Location data brokers use these advertising IDs to stitch together the information they collect about you from different apps.
- Review apps you’ve granted location permissions to. Apps that have access to your location could share it with other companies, so make sure you’re only granting location permission to apps that really need it in order to function. If you can’t disable location access completely for an app, limit it to only when you have the app open or only approximate location instead of precise location.
For more tips, check out EFF’s guide to protecting yourself from mobile-device based location tracking. Keep in mind that the security plan that’s best for you will vary in different situations. For example, you may want to take stronger steps to protect your location data when traveling to a sensitive location, like a protest.
What Tech Companies and Lawmakers Must DoLegislators and tech companies must act so that individuals don’t bear the burden of defending their data every time they use the internet.
Ad tech companies must reckon with their role in warrantless government surveillance, among other privacy harms. The systems they built for targeted advertising are actively used to track people’s location. The best way to prevent online ads from fueling surveillance is to stop targeting ads based on detailed behavioral profiles. Ads can still be targeted contextually—based on the content people are viewing—without collecting or exposing their sensitive personal information. Short of moving to contextual advertising, tech companies can limit the use of their systems for government location tracking by:
- Stopping the use of precise location data for targeted advertising. Ad tech companies facilitating ad auctions can and should remove precise location data from bid requests. Ads can be targeted based on people’s coarse location, like the city they’re in, without giving data brokers people’s exact GPS coordinates. Precise location data can reveal where we work, where we live, who we meet, where we protest, where we worship, and more. Broadcasting it to thousands of companies a day through RTB is dangerous.
- Removing advertising IDs from devices, or at minimum, disabling them by default. Advertising IDs have become a linchpin of the data broker economy and are actively used by law enforcement to track people’s location. Ad IDs were added to phones in 2012 to let companies track you, and removing them is not a far-fetched idea. When Apple forced apps to request access to people’s advertising IDs starting in 2021 (if you have an iPhone you’ve probably seen the "Ask App Not to Track" pop-ups), 96% of U.S. users opted out, essentially disabling Advertising IDs on most iOS devices. One study found that iPhone users were less likely to be victims of financial fraud after Apple implemented this change. Google should follow Apple’s lead and disable advertising IDs by default.
Lawmakers also need to step up to protect their constituents' privacy. We need strong, federal privacy laws to stop companies from spying on us and selling our personal information. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, as it creates a financial incentive for companies to track our every move.
Legislators can and must also close the "data broker loophole" on the Fourth Amendment. Instead of obtaining a warrant signed by a judge, law enforcement agencies can just buy location data from private brokers to find out where you've been. Last year, Montana became the first state in the U.S. to pass a law blocking the government from buying sensitive data it would otherwise need a warrant to obtain. And in 2024, Senator Ron Wyden's EFF-endorsed Fourth Amendment is Not for Sale Act passed the House before dying in the Senate. Others should follow suit to stop this end-run around constitutional protections.
Online behavioral advertising isn’t just creepy–it’s dangerous. It's wrong that our personal information is being silently harvested, bought by shadow-y data brokers, and sold to anyone who wants to invade our privacy. This latest revelation of warrantless government surveillance should serve as a frightening wakeup call of how dangerous online behavioral advertising has become.
New catalog more than doubles the number of gravitational-wave detections made by LIGO, Virgo, and KAGRA observatories
When the densest objects in the universe collide and merge, the violence sets off ripples, in the form of gravitational waves, that reverberate across space and time, over hundreds of millions and even billions of years. By the time they pass through Earth, such cosmic ripples are barely discernible.
And yet, scientists are able to detect them, thanks to a global network of gravitational-wave observatories: the U.S.-based National Science Foundation Laser Interferometer Gravitational-Wave Observatory (NSF LIGO), the Virgo interferometer in Italy, and the Kamioka Gravitational Wave Detector (KAGRA) in Japan. Together, the observatories “listen” for faint wobbles in the gravitational field that could have come from far-off astrophysical smash-ups.
Now the LIGO-Virgo-KAGRA (LVK) Collaboration is publishing its latest compilation of gravitational-wave detections, presented in a forthcoming special issue of Astrophysical Journal Letters. From the findings, it appears that the universe is echoing all over with a kaleidoscope of cosmic collisions.
The LVK’s Gravitational-Wave Transient Catalog-4.0 (GWTC-4) comprises detections of gravitational waves from a portion of the observatories’ fourth and most recent observing run, which occurred between May 2023 and January 2024. During this nine-month period, the observatories detected 128 new gravitational-wave “candidates,” meaning that the signals are likely from extreme, far-off astrophysical sources. (The LVK detected about 300 mergers so far in the fourth run, but not all of these appear yet in the LVK catalog.)
This newest crop more than doubles the size of the gravitational-wave catalog, which previously contained 90 candidates compiled from all three previous observing runs.
“The beautiful science that we are able to do with this catalog is enabled by significant improvements in the sensitivity of the gravitational-wave detectors as well as more powerful analysis techniques,” says LVK member Nergis Mavalvala, who is dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics.
“In the past decade, gravitational wave astronomy has progressed from the first detection to the observation of hundreds of black hole mergers,” says Stephen Fairhurst, a professor at Cardiff University and LIGO Scientific Collaboration spokesperson. “These observations enable us to better understand how black holes form from the collapse of massive stars, probe the cosmological evolution of the universe and provide increasingly rigorous confirmations of the theory of general relativity.”
“Pushing the edges”
Black holes are created when all the matter in a dying star collapses into a single point. Black holes are therefore among the densest objects in the universe. Black holes often form in pairs, bound together through the gravitational attraction. As they spiral toward each other, they emit enormous amounts of energy in the form of gravitational waves, before merging into a more massive black hole.
A binary black hole was the source of the very first gravitational-wave detection, made by NSF’s LIGO observatories in 2015, and colliding black holes are the source of many of the gravitational waves detected since then. Such “bread-and-butter” binaries typically consist of two black holes of similar size (usually several tens of times more massive than the sun) that merge into one larger black hole.
Gravitational waves can also be produced by the collision of a black hole with a neutron star, which is an extremely dense remnant core of a massive star. While the collision of two black holes only produces gravitational waves, a smash-up involving a neutron star can also generate light, which provides more information about the event that scientists can probe. In its first three observing runs, the LVK observatories detected signals from a handful of collisions involving a black hole and neutron star, as well as two collisions between two neutron stars.
The newest detections published today reveal a greater variety of binaries that produce gravitational waves. In addition to the black hole binaries, the updated catalog includes the heaviest black hole binary; a binary with black holes of asymmetric, lopsided masses; and a binary where both black holes have exceptionally high spins. The catalog also holds two black hole-neutron star binaries.
“The message from this catalog is: We are expanding into new parts of what we call ‘parameter space’ and a whole new variety of black holes,” says co-author Daniel Williams, a research fellow at the University of Glasgow and a member of the LVK. “We are really pushing the edges, and are seeing things that are more massive, spinning faster, and are more astrophysically interesting and unusual.”
Unusual signals
The LIGO, Virgo, and KAGRA observatories detect gravitational waves using L-shaped, kilometer-scale instruments, called interferometers. Scientists send laser light down the length of each tunnel and precisely measure the time it takes each beam to return to its source. Any slight difference in their timing can mean that a gravitational wave passed through and minutely wobbled the laser’s light.
For the first segment of the LVK’s fourth observing run, gravitational-wave detections were made using only LIGO’s identical interferometers — one located in Hanford, Washington, and the other in Livingston, Louisiana. Recent upgrades to LIGO’s detectors enabled them to search for signals from binary neutron stars as far out as 360 megaparsecs, or about 1 billion light-years away, and for signals from binaries including black holes tens of times farther away.
“You can’t ever predict when a gravitational wave is going to come into your detector,” says co-author and LVK member Amanda Baylor, a graduate student at the University of Wisconsin at Milwaukee who was involved in the signal search process. “We could have five detections in one day, or one detection every 20 days. The universe is just so random.”
Among the more unusual signals that LIGO detected in the first phase of the O4 observing run was GW231123_135430, which is the heaviest black hole binary detected to date. Scientists estimate that the signal arose from the collision of two heavier-than-normal black holes, each roughly 130 times as massive as the sun. (Most of the detected merging black holes are around 30 solar masses.) The much heavier black holes of GW231123_135430 suggest that each may be a product of a prior collision of lighter “progenitor” black holes.
Another standout is GW231028_153006, which is a black hole binary with the highest inspiral spin, meaning that both black holes appear to be spinning very fast, at about 40 percent the speed of light. Again, scientists suspect that these black holes were also products of previous mergers that spun them up as they were created from two smaller, inspiraling black holes.
The O4 run also detected GW231118_005626 — an unusually lopsided pair, with one black hole twice as massive as the other.
“One of the striking things about our collection of black holes is their broad range of properties,” says co-author LVK member Jack Heinzel, an MIT graduate student who contributed to the catalog’s analysis. “Some of them are over 100 times the mass of our sun, others are as small as only a few times the mass of the sun. Some black holes are rapidly spinning, others have no measurable spin. We still don’t completely understand how black holes form in the universe, but our observations offer a crucial insight into these questions.”
Cosmic connections
From the newest gravitational-wave detections, scientists have begun to make connections about the properties of black holes as a population.
“For instance, this dataset has increased our belief that black holes that collided earlier in the history of the universe could more easily have had larger spins than the ones that collided later,” says LVK member Salvatore Vitale, associate professor of physics at MIT and member of the MIT LIGO Lab.
This idea raises interesting questions about what sort of conditions could have spun up black holes in the early universe.
The new detections have also allowed scientists to test Albert Einstein’s general theory of relativity, which describes gravity as a geometric property of space and time.
“Black holes are one of the most iconic and mind-bending predictions of general relativity,” says co-author and LVK member Aaron Zimmerman, associate professor of physics at the University of Texas at Austin, adding that when black holes collide, they “shake up space and time more intensely than almost any other process we can imagine observing. When testing our physical theories, it’s good to look at the most extreme situations we can, since this is where our theories are most likely to break down, and where we have the best chance of discovery.”
Scientists put Einstein’s theory to the test using GW230814_230901, which is one of the “loudest” gravitational-wave signals observed to date. The surprisingly clear signal gave scientists a chance to probe it in detail, to see if any aspects of the signal might deviate from what Einstein’s theory predicts. This signal pushed the limits of their tests of general relativity, passing most with flying colors but illustrating how environmental noise can challenge others in such an extreme scenario.
“So far, the theory is passing all our tests,” Zimmerman says. “But we’re also learning that we have to make even more accurate predictions to keep up with all the data the universe is giving us.”
The updated catalog is also helping scientists to nail down a key mystery in cosmology: How fast is the universe expanding today? Scientists have tried to answer this by measuring a rate known as the Hubble constant. Various methods, using different astrophysical sources, have given conflicting answers.
Gravitational waves offer an alternative way to measure the Hubble constant, since scientists are able to work out, in relatively straightforward fashion, how far these waves traveled from their source.
“Merging black holes have a really unique property: We can tell how far away they are from Earth just from analyzing their signals,” says co-author and LVK member Rachel Gray, a lecturer at the University of Glasgow who was involved in the cosmological interpretations of the catalog’s data. “So, every merging black hole gives us a measurement of the Hubble constant, and by combining all of the gravitational wave sources together, we can vastly improve how accurate this measurement is.”
By analyzing all the gravitational-wave detections in the LVK’s entire catalog, scientists have come up with a new, independent estimate of the Hubble constant, that suggests the universe is expanding at a rate of 76 kilometers, per second, per megaparsec (a square volume of about half a billion light-years wide).
“It’s still early days for this method, and we expect to significantly improve our precision as we detect more gravitational wave sources,” Gray says.
“Each new gravitational-wave detection allows us to unlock another piece of the universe’s puzzle in ways we couldn’t just a decade ago,” says Lucy Thomas, who led part of the catalog’s analysis, and is a postdoc in the Caltech LIGO Lab. “It’s incredibly exciting to think about what astrophysical mysteries and surprises we can uncover with future observing runs."
Trump’s Iran strikes boost China’s energy edge
Why the Iran war could spur higher emissions
Judge pauses climate case pending Supreme Court ruling
FEMA releases disaster aid after Noem is berated by GOP senator
EPA hands off Energy Star to DOE
White House ‘looking under every rock’ to lower gas prices
This dark money group targeted Biden environmental rules. Now its members do it for Trump.
Minnesota eyes climate superfund amid federal rollbacks
Tom Steyer’s tech and AI plan
‘It’s too warm’: Greenland’s fishermen are under threat from climate change
Hacked App Part of US/Israeli Propaganda Campaign Against Iran
Wired has the story:
Shortly after the first set of explosions, Iranians received bursts of notifications on their phones. They came not from the government advising caution, but from an apparently hacked prayer-timing app called BadeSaba Calendar that has been downloaded more than 5 million times from the Google Play Store.
The messages arrived in quick succession over a period of 30 minutes, starting with the phrase ‘Help has arrived’ at 9:52 am Tehran time, shortly after the first set of explosions. No party has claimed responsibility for the hacks...
Speaking Freely: Shin Yang
*This interview has been edited for length and clarity.
David Greene: Shin, please introduce yourself to the Speaking Freely community.
Shin Yang: My name is Shin Yang. I am a queer writer with a legal background and experience in product management. I am the steward of Lezismore, an independent, self-hosted, open-source community for sexual minorities in Taiwan. For the past decade, I have focused on platform governance as infrastructure, with a particular emphasis on anonymity, minimal data collection, and behavior-based accountability, so that people can speak about intimacy and identity without fear of extraction or exposure. I am a community architect and builder, not an influencer. I’ve spent most of the past decade working anonymously building systems, designing governance protocols, and holding space for others to speak while keeping myself in the background.
DG: Great. And so let’s talk about how that work intersects with freedom of expression as a principle, and your own personal feelings about freedom of expression. And so with that in mind, let me just start with a basic question, what does freedom of expression mean to you?
SHIN: For me, free expression is about possibility, and possibility always contains both and even multiple ends, the beautiful ones and the brutal in equal measure. Maybe not that equal, but you cannot just speak about the beautiful or good things. I think it's not about pushing discomfort out of the room. If we refuse all discomfort, we end up in echo chambers, which is safe, predictable; but dead. What matters to me is the equipment and principles: Who carries through that discomfort, self-discipline, mutual support, and the infrastructure and governance that can let people grow over time. Keeps a workable gray space open: room to make mistakes, learn, repair, and keep speaking.
DG: How does that resonate with you personally? Why are you passionate about that?
SHIN: Around 2013 in Taiwan's context, when Facebook started to take over the digital ecosystem in Taiwan, many local independent bulletin boards (BBS) that had been formed for sexual minorities were shut down because they had no income from advertisements, and people were pushed into mainstream platforms—like Facebook, Instagram, Meta, whatever, Twitter now X—where sexual expression was usually reported or flagged, and where I watched sharp intra-community exclusionary voices saying “bisexual and trans people were not pure enough”, or that talking openly about sex would harm our image, or that it was inappropriate to children, or it would invite harassment. Those oppressions are even fiercer within the queer community itself, which is self-censoring in order to gain approval from mainstream society.
So, the community itself says that the best way to do it is don't talk about it. Never talk about it. Never mention a single thing about it. It was a wakeup call for me, because I think it's not right. And also, there's another more private story for me, it's a story I heard from our sexual minority community. I once heard about a butch student who was sexually assaulted by a group of men because she dated a beautiful classmate, a beautiful woman in the class.
And when I learned what happened to her, that story changed my focus. Because, you know, when people hear this kind of story, they always focus on punishing those men, punishing those criminals—but what matters for me most is building conditions where someone like her could someday still have a chance at intimacy on her own terms, and finally be free from fear. That's more important for me. I may never meet her, but I know who I am and what I'm here to build. I have been building an infrastructure –– not just “safe space” as a slogan, but an “ecospace” designed to make survival and growth possible. So that's why I believe that a well-governed space is what matters for communities now.
DG: Why is it so important for sexual minorities to have forums where they can communicate in that way? When it was just the bulletin boards, before social media, what worked really well and what didn’t work well?
SHIN: That’s a wonderful question. Okay, the bulletin boards I used before, the registration process doesn't require a lot of information. You just need email.
What I miss about bulletin boards is the sense of structure. You didn’t enter a personalized feed—you entered a place with visible rooms and topics. Even in the spaces you visited daily, you’d encounter views you didn’t like, and you had to live with that—and learn how to argue, or leave, or build something parallel. In some boards, moderators were community-chosen, which created a practical kind of participation—not perfect democracy, but civic practice.
You have to provide the information of which school you are in, because it's based on school. But it's not that difficult to use that. And also they have some kinds of logistics, like you log into different boards with different topics, and you can see that there are huge topics along with several small topics. So when you log into that, you can sense and feel the whole structure of that community. It's not a personal feed bombing you with everything you like. So you know, even in the board you’re most likely to visit every day, you will definitely encounter some speech you don't like, and you argue with them, you fight with them, or build something parallel, that's the civic foundation of democracy. You experience the everyday practice of civic democracy. People can vote for moderators or even recall them.
DG: You mean, the community can ask them to leave the bulletin boards?
SHIN: No, they don't actually leave the bulletin board. It's more that the moderator no longer has the right to perform administrative tasks, but they can still be part of the community, and ordinary users can vote in the election for this.
DG: Okay, and then what were the shortcomings of the bulletin boards?
SHIN: Yeah, it’s brutal. Really brutal. And I’ve seen people literally organize to push others out. I didn’t expect this to turn into story time, but I actually love this. So—back in Taiwan, we had this big BBS forum called PTT. There was a board called the ‘Sex’ board, where people could talk about sexual topics and share sexual health info. But around 2010, the space was dominated by mainstream straight cis men. And whenever a woman or a sexual minority posted anything, they often got harassed or attacked. So, women created another board inside the forum—basically a separate space—called ‘Feminine Sex.’ And from then on, the original Sex board and the Feminine Sex board were in conflict all the time. And honestly, if this happened today on Facebook, Threads, or X… we’d just block each other. Easy. Clean. Done.
But the problem is: when blocking becomes the default, we don’t really learn how to argue well, how to organize our reasons, or even how to sit with discomfort and understand why the other side thinks the way they do. We lose that practice—because it’s just so easy to delete people from our world now. I’m not saying blocking is always wrong. But there’s a trade-off.
DG: I get that. Then when Facebook and the other social media platforms that followed came along and the users migrated over to the commercial services, what was lost?
SHIN: What was lost? I think our behavior got shaped—personal branding became the default setting for joining an online community. If you don't do it, like me, you basically don't exist. Influence can be shaped by the number of social media followers; people define each other based on this. Choosing not to obey the logic of mainstream platforms means being unseen, and being unseen means having no influence.
And sure, personal branding can be useful—but I don’t believe it’s the only way to express yourself or connect with a community. The problem is, on mainstream platforms, the whole system is built for visibility. So clout becomes the game. Look at what they push: stories, reels, short-form visuals. And as a former product manager, I can tell you—this is not accidental. It’s designed. It’s designed around human nature: to avoid friction as much as possible. So they keep you scrolling, to make reacting effortless. One tap and you’ve sent a smiley face. Engagement becomes easier… but also cheaper.
And the scary part is, people start thinking that’s the whole internet. It’s not. But the more we get trained by these interfaces, the harder it becomes to even imagine other ways of building community. It is becoming more difficult for people to imagine that the "right" amount of friction can actually help us to grow, and coexist with the diversity.
DG: So did you find that there were certain things you couldn't talk about on Facebook or on the other social media platforms because they were sexual, because sexual speech was not as welcome as it was earlier?
SHIN: Yes, when I first started building my community, I knew nothing about technology. Like everyone else, I just created a fan page on Facebook, which was then flagged and deleted. This happened. I think it still happens to this day. At first, I was so angry about it. I felt it was unjust. But every time I wrote to Facebook, they just said that I had violated the user terms. At first I was furious. But I don’t stop at anger. I dig deeper. I thought, “Why do you say I violated the user terms?”
I read the terms, compared policies across platforms and applications, and realized the pattern: All of the terms of use forbid adult or erotic content in fine print. Because these are profit-driven systems optimized to minimize legal and business risk. So, I don’t frame it as “evil platforms.” I frame it as incentives. Once I understood this, I realized that we should not only protest and ask those big tech platforms to “give” us a voice –– that's a good approach, but it shouldn't be the only one. I believe we should build our own community. That's why I started researching open-source software and building my own self-hosted community.
DG: Please talk a little bit more about what you're building, and how what you're building is consistent with your view of free expression.
SHIN: Sure. It’s a long process but the reason why I use open-source software is, for a person knowing nothing about technology, I can come to the open-source community and ask questions about it. It’s more reliable than building it myself.
And the second example is about how I designed Lezismore’s registration and community access, mostly through trial and error.
We don’t require any real-name or ID verification. In fact, you can register with just an email. But instead of “verifying people,” we redesigned the "space".
Lezismore is built as a two-layer structure. The main website is searchable, but it looks almost… boring on purpose—advocacy articles, writers’ posts, slow content. The truly active community space is inside that main site, and the entry point is not something you casually discover through search. Most people learn how to get in through word of mouth. We also block search engines, bots, and crawlers from the community area. So from day one, we gave up visibility on purpose—we traded reach for resilience.
Then there’s the onboarding. New users go through an “apprenticeship” period. You can’t immediately post, comment, or DM people. You first have to read, observe, and understand how the community works. We don’t even tell you exactly how long it takes—you just have to be patient. In the fast-content era, people constantly complain that this is “annoying” or “hard to use.” And yes, it is friction indeed.
But that friction buys something valuable: a space that can stay anonymous, inclusive, and high-trust—without being instantly overwhelmed by harassment or bad-faith users. It also means we don’t need to depend on Big Tech’s third-party verification APIs. With relatively low technical cost, we’re using governance design—not data collection—to balance inclusion and protection.
And honestly, as a platform owner, I have to be real about what users “actually” need. If this was truly “just terrible UX,” the site wouldn’t survive in today’s hyper-competitive platform environment. But Lezismore has been running for over a decade, and we still have tens of thousands of people quietly reading and interacting every month. This is one of the biggest tradeoffs in my governance design. In an attention economy, choosing low visibility is a bold decision, and maintaining it has a real cost.
On top of that, we rely on human, context-based moderation. We use posts, replies, and Q&A threads to actively teach community norms—why diversity and conflict exist, how to handle risk, and how to protect yourself. Users also share practical safety tips and real interaction experiences with each other. There are many more small mechanisms built into the system, but that’s the core logic.
And there’s one more layer: the legal environment. In Taiwan, the legal climate around sex and speech can create chilling effects for smaller platforms. Platform owners can be criminally liable in certain scenarios. That’s exactly why governance design matters—it’s how we keep lawful expression possible without over-collecting data.
DG: Ah, so you need to be careful. I’m curious whether you’ve had any examples of offline repression. Do you have any experiences with censorship or feeling like you didn’t have your full freedom of expression in your offline experiences? Any experiences that might inform what an ideal online community might look like?
SHIN: Yes—actually, most of my earliest experiences with repression were offline, and they shaped how I later understood the internet as an escape route.
Back when I was a high school student, I was already involved in student movements and gender-related advocacy. One very concrete example was dress codes. The school restricted what female students could wear, and students organized to push for change. At one point we even had a vote—something like 98% of students supported revising the policy. But when the issue entered the “official” system, the administration simply ignored it. They bypassed procedure, dismissed the consensus, and used authority to shut it down completely.
That was my first clear lesson about repression: it’s not always someone telling you “you’re forbidden to speak.” Sometimes it’s a system designed so that even if students, women, or sexual minorities spend enormous effort building agreement, once our voices enter the institution, they can be treated as if they don’t exist.
That’s why, in the early 2010s, online space became my breakthrough. This was still the blog era, before social platforms fully standardized everything, and even before “share” mechanisms were built into everyday activism. I started experimenting with things like blog-based petitions, and a lot of students joined. The internet became a way to bypass institutional gatekeeping.
In college, I saw another layer. There was serious sexism from people in authority—military-style discipline officers, some teachers, and administrators. When gender-related controversies happened on campus, the media sometimes showed up and reported in ways that were harmful: exposing people, sensationalizing stories, and ignoring the realities of sexual minority students. Meanwhile, the administration would shut down student demands with authority, and at the same time use incentives and pressure behind the scenes, especially around housing or “benefits”—so some student representatives were afraid to speak honestly in meetings.
And this was before livestreaming was a normal tool. But even then, I was already using audio-based live channels to connect students across campuses. Online networks became a lifeline for young advocates, especially those of us who didn’t “fit” the institution and needed each other to survive.
I came from a literature background. I had zero technical training at the beginning. But I’ve always been the kind of person who loves trying new technology. And I was lucky, because I was born in that strange window when the internet was rapidly expanding, but not yet fully swallowed by Big Tech. So, I grew up in this tension between nostalgia and innovation, and I kept pushing, resisting, and experimenting. I’ve experienced both sides of speech: how beautiful freedom can be, and how terrifying it can become.
DG: Going back to Lezismore, I’m curious: When you ask people to observe before they post, what are you hoping they learn about the community before they more actively participate in it?
SHIN: I hope people understand that this is a community rather than a dating app focused on results. The community needs people to support and nurture each other. Some people see us as a dating app and expect a frictionless experience; naturally, they are disappointed. If you're only looking for a fast-food relationship, that's fine. Here, however, it is a community that offers more than just hooking up. The design focuses on words and a person’s behavioral history rather than just a photo. Dopamine bombing is not how we do things here.
We’ve also built a library of community safety notes, FAQs, and governance reminders over time. Some written by the team, some contributed by members. Not everyone reads them, and that’s fine. But the design makes it easier for people who want a slower, more intentional space to stay—and for people who want something frictionless to self-select out.
SHIN: I run the platform anonymously by design. People may know that there’s an admin called “Shin”, but I don’t associate a face or personal brand with the role because I don’t want the community to depend on my visibility for their trust.
We maintain a clear distinction between work and private life. Admin power is never a shortcut to social capital. In a sex-positive space, this boundary is a matter of ethics. The moment a founder’s identity becomes central, the space starts to orbit that person, and expectations, fan-service dynamics and power asymmetries creep in. Then speech becomes performance.
It also means I’m less “marketable” to attention-driven media—but that tradeoff protects the community’s integrity. Some media outlets only want a face and a persona. However, I accept this cost because I am trying to build a community that can thrive independently of an idol, where people relate to each other through behavior and shared norms, not proximity to the founder.
DG: It sounds like a lot of what you’re doing is about people being authentic on the site, not using personas or using it to create a personal platform for themselves for marketing purposes.
SHIN: Exactly, people can share links, but if a post is purely self-promotion with no contribution to the community, we don’t encourage this. I hope people here can respect the reciprocity.
DG: I want to shift a bit and talk about freedom of expression as a principle for a while. Do you think freedom of expression should be regulated by governments?
SHIN: Speech regulation is hard, because speech is freaking messy. And once you turn messy human speech into rules that scale, nuance gets flattened. Minority communities usually pay first, because large systems choose efficiency over lived reality.
I also don’t think the answer is “erase all conflict.” Some friction is the price of pluralism, and with good guidance and interface design, conflict can become a point of learning instead of a point of collapse. From a platform owner’s perspective, legal liability is real and often cruel. So if we expect platforms to be free, frictionless, allow everything we like, erase everything we dislike, and still amplify our visibility—then we’re really asking for magic. That’s why we need to talk seriously about alternatives and procedural safeguards, not just louder demands.
Age verification is a good example. I get that the goal is to protect minors. But identity-based age gates often turn into identity infrastructure. They chill lawful adult speech, concentrate gatekeeping power, and push everyone to hand over personal data just to access legal content. From my experience, there are other tools that can reduce harm with less damage—things like community design, visibility gating, and human, context-based moderation. Those approaches can protect people without building a personal-data checkpoint for everyone.
DG: You talked about minority voices, and minority speech. Are you concerned that any regulation will end up trying to silence minority speakers, or won’t benefit minority speakers. How are these speakers more vulnerable to speech regulations than others?
SHIN: Hmmm......a lot of minority speech is context-heavy. The same words can be support, education, or harassment depending on who says it and why. When regulation turns into broad categories, sexual health education, self-explore experiment sharing, trans healthcare discussions, or reclaimed language can be treated as “harmful” out of context (at both sides). So the risk isn’t only censorship, it’s misclassification at scale.
DG: Are there certain types of speech that don’t deserve the conversation. Some people might say that hate speech or speech that’s dehumanizing doesn’t deserve the conversation. Are there any categories of speech that you would say we shouldn’t consider, or do we get to talk about everything?
SHIN: Okay, I don't think the issue is about saying certain kinds of speech don't deserve to be discussed; the problem lies in the definition. As soon as we suggest that some speech doesn't merit discussion, some people will exploit this to silence their opponents. Whether it's right-wing, left-wing or anything else, if we say that we don't allow any kind of hate speech, the next thing someone will do is define your speech as hate speech. It's an endless war that draws us all into an eagerness to silence others and grab the mic, instead of creating more space for conversations and learning from each other.
We should go further than just regulation and create spaces where people can coexist in a grey area, endure some discomfort and engage with each other. I prefer this approach to trying to draw lines.
DG: So even well-intentioned restrictions might always be used against minority speakers?
SHIN: I wouldn’t say restriction is not good. There always has to be some kind of restriction, but people will always find a way to overcome or take advantage of it. So, the thing I believe is that regulation is regulation, but community should be an open-source archive. How we govern community, how we dialogue between each other when we disagree with each other…how can we create a space where those things can exist? I believe that those things should be open source. People always talk about open source like it’s just coding, but I believe governance should be open source too.
DG: So when you said before some restrictions are necessary but then we talk about open source governance, we’re talking about the same thing. When you say some restrictions are necessary, you’re not necessarily saying government restrictions, but that restrictions should come from somewhere else: that’s an open source governance model?
SHIN: Yes. And it should include restrictions in law, and how people deal with it, the way we deal with it. I’m not saying every rule or detection signal should be public. By “open-source governance,” I mean shareable governance playbooks: proportional steps, appeals templates, community norms, and design patterns that small communities can adapt. The goal is portability and adaptability of methods, not making systems easy to game. Because malice is always part of the environment.
DG: Is there anything else you want to say about your theory of open-source governance or what it means to you?
SHIN: I noticed there was a question in another interview about fostering transparency in social media, and how to appeal, and that the reason [for a takedown] should be more transparent. The interesting thing is that before our interview today I was joining a law and technology policy research group, and they’re reading a book called “Law and Technology: A Methodical Approach”. It's worth mentioning that it's very interesting. Apparently, scientists tend to place emphasis on complexity, which often trips up pragmatic reform efforts, so the recommendations often only call for greater transparency or participation.
I think this echoes what we were talking about before and the transparency thing. I heard this podcast in Taiwan about cybersecurity where they interview an outsourced ex-moderator from Meta and how the platform moderates speech. Because most of the information is confidential, the moderator can’t say too much, but she told us that every day Meta provided a whole set of lists with things they should ban, and every day it changes. Sometimes it even changes on an hourly basis. And they can never really put those fully transparent to the world. The reason they can’t do that is because those words are partially forbidding scams, because the scale is too big. So, when they show the transparency of how they ban things, the scammers will use this against them. Like, “now you’ve banned this word so I’ll just use another one.” It’s an endless war. So, I think transparency matters, but it shouldn’t be the only thing we think about, we should think about governance as well. And when we talk about governance, we shouldn’t just think about some high authority in government or a law just forcing the platform into something we like. We should go back and think about what we can do. We’ve got lots of open-source software now and we can literally build those things by ourselves. That’s what I’m trying to say.
DG: Okay, one last question. This is the last question we ask everybody. Who’s your free speech hero?
SHIN: This is the question I saw everyone answering, and I honestly struggled with it. Because I’m Taiwanese, and the names that often come up in U.S. free speech conversations aren’t the names I’m familiar with. I’m sorry about this.
DG: That’s okay, it doesn’t have to be a perfect answer.
SHIN: If you want a public figure from Taiwan, I think of the journalists and dissidents who pushed for press freedom during Taiwan’s democratization—Nylon (Tēnn Lâm-iông) is one name many Taiwanese recognize.
If I answer this as truthfully as I can, my hero is my family. My father taught me that integrity is not a slogan. It’s the ability to keep your ethics when it costs you something. My mother is the opposite kind of teacher: she’s relentless in a practical way: she doesn’t easily back down, and she keeps finding room to move even when the room is small. Put together, that’s what free expression means to me. It’s not “I can say anything.” It's about whether you can continue to think independently and live with integrity through layers of fear, pressure, temptation and coercion, while still moving forward and creating more possibilities for others.
Speaking Freely: Shin Yang
*This interview has been edited for length and clarity.
David Greene: Shin, please introduce yourself to the Speaking Freely community.
Shin Yang: My name is Shin Yang. I am a queer writer with a legal background and experience in product management. I am the steward of Lezismore, an independent, self-hosted, open-source community for sexual minorities in Taiwan. For the past decade, I have focused on platform governance as infrastructure, with a particular emphasis on anonymity, minimal data collection, and behavior-based accountability, so that people can speak about intimacy and identity without fear of extraction or exposure. I am a community architect and builder, not an influencer. I’ve spent most of the past decade working anonymously building systems, designing governance protocols, and holding space for others to speak while keeping myself in the background.
DG: Great. And so let’s talk about how that work intersects with freedom of expression as a principle, and your own personal feelings about freedom of expression. And so with that in mind, let me just start with a basic question, what does freedom of expression mean to you?
SHIN: For me, free expression is about possibility, and possibility always contains both and even multiple ends, the beautiful ones and the brutal in equal measure. Maybe not that equal, but you cannot just speak about the beautiful or good things. I think it's not about pushing discomfort out of the room. If we refuse all discomfort, we end up in echo chambers, which is safe, predictable; but dead. What matters to me is the equipment and principles: Who carries through that discomfort, self-discipline, mutual support, and the infrastructure and governance that can let people grow over time. Keeps a workable gray space open: room to make mistakes, learn, repair, and keep speaking.
DG: How does that resonate with you personally? Why are you passionate about that?
SHIN: Around 2013 in Taiwan's context, when Facebook started to take over the digital ecosystem in Taiwan, many local independent bulletin boards (BBS) that had been formed for sexual minorities were shut down because they had no income from advertisements, and people were pushed into mainstream platforms—like Facebook, Instagram, Meta, whatever, Twitter now X—where sexual expression was usually reported or flagged, and where I watched sharp intra-community exclusionary voices saying “bisexual and trans people were not pure enough”, or that talking openly about sex would harm our image, or that it was inappropriate to children, or it would invite harassment. Those oppressions are even fiercer within the queer community itself, which is self-censoring in order to gain approval from mainstream society.
So, the community itself says that the best way to do it is don't talk about it. Never talk about it. Never mention a single thing about it. It was a wakeup call for me, because I think it's not right. And also, there's another more private story for me, it's a story I heard from our sexual minority community. I once heard about a butch student who was sexually assaulted by a group of men because she dated a beautiful classmate, a beautiful woman in the class.
And when I learned what happened to her, that story changed my focus. Because, you know, when people hear this kind of story, they always focus on punishing those men, punishing those criminals—but what matters for me most is building conditions where someone like her could someday still have a chance at intimacy on her own terms, and finally be free from fear. That's more important for me. I may never meet her, but I know who I am and what I'm here to build. I have been building an infrastructure –– not just “safe space” as a slogan, but an “ecospace” designed to make survival and growth possible. So that's why I believe that a well-governed space is what matters for communities now.
DG: Why is it so important for sexual minorities to have forums where they can communicate in that way? When it was just the bulletin boards, before social media, what worked really well and what didn’t work well?
SHIN: That’s a wonderful question. Okay, the bulletin boards I used before, the registration process doesn't require a lot of information. You just need email.
What I miss about bulletin boards is the sense of structure. You didn’t enter a personalized feed—you entered a place with visible rooms and topics. Even in the spaces you visited daily, you’d encounter views you didn’t like, and you had to live with that—and learn how to argue, or leave, or build something parallel. In some boards, moderators were community-chosen, which created a practical kind of participation—not perfect democracy, but civic practice.
You have to provide the information of which school you are in, because it's based on school. But it's not that difficult to use that. And also they have some kinds of logistics, like you log into different boards with different topics, and you can see that there are huge topics along with several small topics. So when you log into that, you can sense and feel the whole structure of that community. It's not a personal feed bombing you with everything you like. So you know, even in the board you’re most likely to visit every day, you will definitely encounter some speech you don't like, and you argue with them, you fight with them, or build something parallel, that's the civic foundation of democracy. You experience the everyday practice of civic democracy. People can vote for moderators or even recall them.
DG: You mean, the community can ask them to leave the bulletin boards?
SHIN: No, they don't actually leave the bulletin board. It's more that the moderator no longer has the right to perform administrative tasks, but they can still be part of the community, and ordinary users can vote in the election for this.
DG: Okay, and then what were the shortcomings of the bulletin boards?
SHIN: Yeah, it’s brutal. Really brutal. And I’ve seen people literally organize to push others out. I didn’t expect this to turn into story time, but I actually love this. So—back in Taiwan, we had this big BBS forum called PTT. There was a board called the ‘Sex’ board, where people could talk about sexual topics and share sexual health info. But around 2010, the space was dominated by mainstream straight cis men. And whenever a woman or a sexual minority posted anything, they often got harassed or attacked. So, women created another board inside the forum—basically a separate space—called ‘Feminine Sex.’ And from then on, the original Sex board and the Feminine Sex board were in conflict all the time. And honestly, if this happened today on Facebook, Threads, or X… we’d just block each other. Easy. Clean. Done.
But the problem is: when blocking becomes the default, we don’t really learn how to argue well, how to organize our reasons, or even how to sit with discomfort and understand why the other side thinks the way they do. We lose that practice—because it’s just so easy to delete people from our world now. I’m not saying blocking is always wrong. But there’s a trade-off.
DG: I get that. Then when Facebook and the other social media platforms that followed came along and the users migrated over to the commercial services, what was lost?
SHIN: What was lost? I think our behavior got shaped—personal branding became the default setting for joining an online community. If you don't do it, like me, you basically don't exist. Influence can be shaped by the number of social media followers; people define each other based on this. Choosing not to obey the logic of mainstream platforms means being unseen, and being unseen means having no influence.
And sure, personal branding can be useful—but I don’t believe it’s the only way to express yourself or connect with a community. The problem is, on mainstream platforms, the whole system is built for visibility. So clout becomes the game. Look at what they push: stories, reels, short-form visuals. And as a former product manager, I can tell you—this is not accidental. It’s designed. It’s designed around human nature: to avoid friction as much as possible. So they keep you scrolling, to make reacting effortless. One tap and you’ve sent a smiley face. Engagement becomes easier… but also cheaper.
And the scary part is, people start thinking that’s the whole internet. It’s not. But the more we get trained by these interfaces, the harder it becomes to even imagine other ways of building community. It is becoming more difficult for people to imagine that the "right" amount of friction can actually help us to grow, and coexist with the diversity.
DG: So did you find that there were certain things you couldn't talk about on Facebook or on the other social media platforms because they were sexual, because sexual speech was not as welcome as it was earlier?
SHIN: Yes, when I first started building my community, I knew nothing about technology. Like everyone else, I just created a fan page on Facebook, which was then flagged and deleted. This happened. I think it still happens to this day. At first, I was so angry about it. I felt it was unjust. But every time I wrote to Facebook, they just said that I had violated the user terms. At first I was furious. But I don’t stop at anger. I dig deeper. I thought, “Why do you say I violated the user terms?”
I read the terms, compared policies across platforms and applications, and realized the pattern: All of the terms of use forbid adult or erotic content in fine print. Because these are profit-driven systems optimized to minimize legal and business risk. So, I don’t frame it as “evil platforms.” I frame it as incentives. Once I understood this, I realized that we should not only protest and ask those big tech platforms to “give” us a voice –– that's a good approach, but it shouldn't be the only one. I believe we should build our own community. That's why I started researching open-source software and building my own self-hosted community.
DG: Please talk a little bit more about what you're building, and how what you're building is consistent with your view of free expression.
SHIN: Sure. It’s a long process but the reason why I use open-source software is, for a person knowing nothing about technology, I can come to the open-source community and ask questions about it. It’s more reliable than building it myself.
And the second example is about how I designed Lezismore’s registration and community access, mostly through trial and error.
We don’t require any real-name or ID verification. In fact, you can register with just an email. But instead of “verifying people,” we redesigned the "space".
Lezismore is built as a two-layer structure. The main website is searchable, but it looks almost… boring on purpose—advocacy articles, writers’ posts, slow content. The truly active community space is inside that main site, and the entry point is not something you casually discover through search. Most people learn how to get in through word of mouth. We also block search engines, bots, and crawlers from the community area. So from day one, we gave up visibility on purpose—we traded reach for resilience.
Then there’s the onboarding. New users go through an “apprenticeship” period. You can’t immediately post, comment, or DM people. You first have to read, observe, and understand how the community works. We don’t even tell you exactly how long it takes—you just have to be patient. In the fast-content era, people constantly complain that this is “annoying” or “hard to use.” And yes, it is friction indeed.
But that friction buys something valuable: a space that can stay anonymous, inclusive, and high-trust—without being instantly overwhelmed by harassment or bad-faith users. It also means we don’t need to depend on Big Tech’s third-party verification APIs. With relatively low technical cost, we’re using governance design—not data collection—to balance inclusion and protection.
And honestly, as a platform owner, I have to be real about what users “actually” need. If this was truly “just terrible UX,” the site wouldn’t survive in today’s hyper-competitive platform environment. But Lezismore has been running for over a decade, and we still have tens of thousands of people quietly reading and interacting every month. This is one of the biggest tradeoffs in my governance design. In an attention economy, choosing low visibility is a bold decision, and maintaining it has a real cost.
On top of that, we rely on human, context-based moderation. We use posts, replies, and Q&A threads to actively teach community norms—why diversity and conflict exist, how to handle risk, and how to protect yourself. Users also share practical safety tips and real interaction experiences with each other. There are many more small mechanisms built into the system, but that’s the core logic.
And there’s one more layer: the legal environment. In Taiwan, the legal climate around sex and speech can create chilling effects for smaller platforms. Platform owners can be criminally liable in certain scenarios. That’s exactly why governance design matters—it’s how we keep lawful expression possible without over-collecting data.
DG: Ah, so you need to be careful. I’m curious whether you’ve had any examples of offline repression. Do you have any experiences with censorship or feeling like you didn’t have your full freedom of expression in your offline experiences? Any experiences that might inform what an ideal online community might look like?
SHIN: Yes—actually, most of my earliest experiences with repression were offline, and they shaped how I later understood the internet as an escape route.
Back when I was a high school student, I was already involved in student movements and gender-related advocacy. One very concrete example was dress codes. The school restricted what female students could wear, and students organized to push for change. At one point we even had a vote—something like 98% of students supported revising the policy. But when the issue entered the “official” system, the administration simply ignored it. They bypassed procedure, dismissed the consensus, and used authority to shut it down completely.
That was my first clear lesson about repression: it’s not always someone telling you “you’re forbidden to speak.” Sometimes it’s a system designed so that even if students, women, or sexual minorities spend enormous effort building agreement, once our voices enter the institution, they can be treated as if they don’t exist.
That’s why, in the early 2010s, online space became my breakthrough. This was still the blog era, before social platforms fully standardized everything, and even before “share” mechanisms were built into everyday activism. I started experimenting with things like blog-based petitions, and a lot of students joined. The internet became a way to bypass institutional gatekeeping.
In college, I saw another layer. There was serious sexism from people in authority—military-style discipline officers, some teachers, and administrators. When gender-related controversies happened on campus, the media sometimes showed up and reported in ways that were harmful: exposing people, sensationalizing stories, and ignoring the realities of sexual minority students. Meanwhile, the administration would shut down student demands with authority, and at the same time use incentives and pressure behind the scenes, especially around housing or “benefits”—so some student representatives were afraid to speak honestly in meetings.
And this was before livestreaming was a normal tool. But even then, I was already using audio-based live channels to connect students across campuses. Online networks became a lifeline for young advocates, especially those of us who didn’t “fit” the institution and needed each other to survive.
I came from a literature background. I had zero technical training at the beginning. But I’ve always been the kind of person who loves trying new technology. And I was lucky, because I was born in that strange window when the internet was rapidly expanding, but not yet fully swallowed by Big Tech. So, I grew up in this tension between nostalgia and innovation, and I kept pushing, resisting, and experimenting. I’ve experienced both sides of speech: how beautiful freedom can be, and how terrifying it can become.
DG: Going back to Lezismore, I’m curious: When you ask people to observe before they post, what are you hoping they learn about the community before they more actively participate in it?
SHIN: I hope people understand that this is a community rather than a dating app focused on results. The community needs people to support and nurture each other. Some people see us as a dating app and expect a frictionless experience; naturally, they are disappointed. If you're only looking for a fast-food relationship, that's fine. Here, however, it is a community that offers more than just hooking up. The design focuses on words and a person’s behavioral history rather than just a photo. Dopamine bombing is not how we do things here.
We’ve also built a library of community safety notes, FAQs, and governance reminders over time. Some written by the team, some contributed by members. Not everyone reads them, and that’s fine. But the design makes it easier for people who want a slower, more intentional space to stay—and for people who want something frictionless to self-select out.
SHIN: I run the platform anonymously by design. People may know that there’s an admin called “Shin”, but I don’t associate a face or personal brand with the role because I don’t want the community to depend on my visibility for their trust.
We maintain a clear distinction between work and private life. Admin power is never a shortcut to social capital. In a sex-positive space, this boundary is a matter of ethics. The moment a founder’s identity becomes central, the space starts to orbit that person, and expectations, fan-service dynamics and power asymmetries creep in. Then speech becomes performance.
It also means I’m less “marketable” to attention-driven media—but that tradeoff protects the community’s integrity. Some media outlets only want a face and a persona. However, I accept this cost because I am trying to build a community that can thrive independently of an idol, where people relate to each other through behavior and shared norms, not proximity to the founder.
DG: It sounds like a lot of what you’re doing is about people being authentic on the site, not using personas or using it to create a personal platform for themselves for marketing purposes.
SHIN: Exactly, people can share links, but if a post is purely self-promotion with no contribution to the community, we don’t encourage this. I hope people here can respect the reciprocity.
DG: I want to shift a bit and talk about freedom of expression as a principle for a while. Do you think freedom of expression should be regulated by governments?
SHIN: Speech regulation is hard, because speech is freaking messy. And once you turn messy human speech into rules that scale, nuance gets flattened. Minority communities usually pay first, because large systems choose efficiency over lived reality.
I also don’t think the answer is “erase all conflict.” Some friction is the price of pluralism, and with good guidance and interface design, conflict can become a point of learning instead of a point of collapse. From a platform owner’s perspective, legal liability is real and often cruel. So if we expect platforms to be free, frictionless, allow everything we like, erase everything we dislike, and still amplify our visibility—then we’re really asking for magic. That’s why we need to talk seriously about alternatives and procedural safeguards, not just louder demands.
Age verification is a good example. I get that the goal is to protect minors. But identity-based age gates often turn into identity infrastructure. They chill lawful adult speech, concentrate gatekeeping power, and push everyone to hand over personal data just to access legal content. From my experience, there are other tools that can reduce harm with less damage—things like community design, visibility gating, and human, context-based moderation. Those approaches can protect people without building a personal-data checkpoint for everyone.
DG: You talked about minority voices, and minority speech. Are you concerned that any regulation will end up trying to silence minority speakers, or won’t benefit minority speakers. How are these speakers more vulnerable to speech regulations than others?
SHIN: Hmmm......a lot of minority speech is context-heavy. The same words can be support, education, or harassment depending on who says it and why. When regulation turns into broad categories, sexual health education, self-explore experiment sharing, trans healthcare discussions, or reclaimed language can be treated as “harmful” out of context (at both sides). So the risk isn’t only censorship, it’s misclassification at scale.
DG: Are there certain types of speech that don’t deserve the conversation. Some people might say that hate speech or speech that’s dehumanizing doesn’t deserve the conversation. Are there any categories of speech that you would say we shouldn’t consider, or do we get to talk about everything?
SHIN: Okay, I don't think the issue is about saying certain kinds of speech don't deserve to be discussed; the problem lies in the definition. As soon as we suggest that some speech doesn't merit discussion, some people will exploit this to silence their opponents. Whether it's right-wing, left-wing or anything else, if we say that we don't allow any kind of hate speech, the next thing someone will do is define your speech as hate speech. It's an endless war that draws us all into an eagerness to silence others and grab the mic, instead of creating more space for conversations and learning from each other.
We should go further than just regulation and create spaces where people can coexist in a grey area, endure some discomfort and engage with each other. I prefer this approach to trying to draw lines.
DG: So even well-intentioned restrictions might always be used against minority speakers?
SHIN: I wouldn’t say restriction is not good. There always has to be some kind of restriction, but people will always find a way to overcome or take advantage of it. So, the thing I believe is that regulation is regulation, but community should be an open-source archive. How we govern community, how we dialogue between each other when we disagree with each other…how can we create a space where those things can exist? I believe that those things should be open source. People always talk about open source like it’s just coding, but I believe governance should be open source too.
DG: So when you said before some restrictions are necessary but then we talk about open source governance, we’re talking about the same thing. When you say some restrictions are necessary, you’re not necessarily saying government restrictions, but that restrictions should come from somewhere else: that’s an open source governance model?
SHIN: Yes. And it should include restrictions in law, and how people deal with it, the way we deal with it. I’m not saying every rule or detection signal should be public. By “open-source governance,” I mean shareable governance playbooks: proportional steps, appeals templates, community norms, and design patterns that small communities can adapt. The goal is portability and adaptability of methods, not making systems easy to game. Because malice is always part of the environment.
DG: Is there anything else you want to say about your theory of open-source governance or what it means to you?
SHIN: I noticed there was a question in another interview about fostering transparency in social media, and how to appeal, and that the reason [for a takedown] should be more transparent. The interesting thing is that before our interview today I was joining a law and technology policy research group, and they’re reading a book called “Law and Technology: A Methodical Approach”. It's worth mentioning that it's very interesting. Apparently, scientists tend to place emphasis on complexity, which often trips up pragmatic reform efforts, so the recommendations often only call for greater transparency or participation.
I think this echoes what we were talking about before and the transparency thing. I heard this podcast in Taiwan about cybersecurity where they interview an outsourced ex-moderator from Meta and how the platform moderates speech. Because most of the information is confidential, the moderator can’t say too much, but she told us that every day Meta provided a whole set of lists with things they should ban, and every day it changes. Sometimes it even changes on an hourly basis. And they can never really put those fully transparent to the world. The reason they can’t do that is because those words are partially forbidding scams, because the scale is too big. So, when they show the transparency of how they ban things, the scammers will use this against them. Like, “now you’ve banned this word so I’ll just use another one.” It’s an endless war. So, I think transparency matters, but it shouldn’t be the only thing we think about, we should think about governance as well. And when we talk about governance, we shouldn’t just think about some high authority in government or a law just forcing the platform into something we like. We should go back and think about what we can do. We’ve got lots of open-source software now and we can literally build those things by ourselves. That’s what I’m trying to say.
DG: Okay, one last question. This is the last question we ask everybody. Who’s your free speech hero?
SHIN: This is the question I saw everyone answering, and I honestly struggled with it. Because I’m Taiwanese, and the names that often come up in U.S. free speech conversations aren’t the names I’m familiar with. I’m sorry about this.
DG: That’s okay, it doesn’t have to be a perfect answer.
SHIN: If you want a public figure from Taiwan, I think of the journalists and dissidents who pushed for press freedom during Taiwan’s democratization—Nylon (Tēnn Lâm-iông) is one name many Taiwanese recognize.
If I answer this as truthfully as I can, my hero is my family. My father taught me that integrity is not a slogan. It’s the ability to keep your ethics when it costs you something. My mother is the opposite kind of teacher: she’s relentless in a practical way: she doesn’t easily back down, and she keeps finding room to move even when the room is small. Put together, that’s what free expression means to me. It’s not “I can say anything.” It's about whether you can continue to think independently and live with integrity through layers of fear, pressure, temptation and coercion, while still moving forward and creating more possibilities for others.
Nitrous oxide, a product of fertilizer use, may harm some soil bacteria
Plant growth is supported by millions of tiny soil microbes competing and cooperating with each other as they perform important roles at the plant root, including improving access to nutrients and protecting against pathogens. As a byproduct of their metabolism, soil microbes can also produce nitrous oxide, or N2O, a potent greenhouse gas that has mostly been studied for its impact on the climate. While some N2O occurs naturally, its production can spike due to fertilizer application and other factors.
While it has long been believed that nitrous oxide doesn’t meaningfully interact with living organisms, a new paper by two MIT researchers shows that it may in fact shape microbial communities, making some bacterial strains more likely to grow than others.
Based on the prevalence of the biological processes disrupted by nitrous oxide, the researchers estimate about 30 percent of all bacteria with sequenced genomes are susceptible to nitrous oxide toxicity, suggesting the substance could play an important and underappreciated role in the intricate microbial ecosystems that influence plant growth.
The researchers have published their findings today in mBio, a journal of the American Society for Microbiology. If their lab findings carry over to agricultural settings, it could influence the way farmers go about everyday tasks that expose crops to spikes in nitrous oxide, such as watering and fertilization.
“This work suggests N2O production in agricultural settings is worth paying attention to for plant health,” says senior author Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor, who wrote the paper with lead author and PhD student Philip Wasson. “It hasn’t been on people’s radar, but it is particularly harmful for certain microbes. This could be another knock against N2O in addition to its climate impact. With more research, you might be able to understand how the timing of N2O production influences these microbial relationships, and that timing could be managed to improve crop health.”
A toxic gas
Nitrous oxide was shown to be toxic decades ago when researchers realized it can deactivate vitamin B12 in the human body. Since then, it has mostly drawn attention as a long-lived greenhouse gas that can eat away at the ozone. But when it comes to agricultural settings, most people have assumed it doesn’t interact with organisms growing in the soil around the plant root, a region called the rhizosphere.
“In general, there’s an assumption that N2O is not harmful at all despite this history of published studies showing that it can be toxic in specific contexts,” says McRose, who joined the faculty of the Department of Civil and Environmental Engineering in 2022. “People have not extended that understanding to microbial communities in the rhizosphere.”
While some studies have shown nitrous oxide sensitivity in a handful of microorganisms, less is known about how it impacts the distribution of microbial communities at the plant root. McRose and Wasson sought to fill that research gap.
They started by looking at a ubiquitous process that cells use to grow called methionine biosynthesis. Methionine biosynthesis can be carried out by enzymes that are dependent on B12 — and by other enzymes that are not. Many bacteria have both types.
Using a well-studied microbe named Pseudomonas aeruginosa, the researchers genetically removed the enzyme that isn’t dependent on B12 and found the microbe became sensitive to nitrous oxide, with its growth harmed even by nitrous oxide it produced itself.
Next the researchers looked at a synthetic microbial community from the plant Arabidopsis thaliana, finding many root-based microbes were also sensitive to nitrous oxide. Combining sensitive microbes with nitrous oxide-producing bacteria hampered their growth.
“This suggests that N2O-producing bacteria can affect the survival of their immediate neighbors,” Wasson explains. Together, the experiments confirmed the researchers’ suspicion that the production of nitrous oxide can hamper the growth of soil bacteria dependent on vitamin B12 to make methionine.
“These results suggest nitrous oxide producers shape microbial communities,” McRose says. “In the lab the result is very clear, and the work goes beyond just looking at a single organism. The co-culture experiments aren’t the same as a study in the field, but it’s a strong demonstration.”
From the lab to the farm
In farms, soil commonly experiences spikes of nitrous oxide for days or weeks from the addition of nitrogen fertilizer, rainfall, thawing, and other events. The researchers caution that their lab experiments are only the first step toward understanding how nitrous oxide affects microbial populations in agricultural settings.
Wasson calls the paper a proof of concept and plans to study agricultural soil next.
“In agricultural environments, N2O has been historically high,” Wasson says. “We want to see if we can detect a signature for this N2O exposure through genome sequencing studies, where the only microbes sticking around are not sensitive to N2O. This is the obvious next step.”
McRose says the findings could lead to a new way for researchers and farmers to think about nitrous oxide.
“What’s important and exciting about this case is it predicts that microbes with one version of an enzyme are going to be sensitive to N2O and those with a different version of the enzyme are not going to be sensitive,” McRose says. “This suggests that in the environment, exposure to N2O is going to select for certain types of organisms based on their genomic content, which is a highly testable hypothesis.”
The work was supported, in part, by the MIT Research Support Committee and a MIT Health and Life Sciences Collaborative Graduate Fellowship (HEALS).
Manipulating AI Summarization Features
Microsoft is reporting:
Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters….
These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated...
