Feed aggregator
Ping pong bot returns shots with high-speed precision
MIT engineers are getting in on the robotic ping pong game with a powerful, lightweight design that returns shots with high-speed precision.
The new table tennis bot comprises a multijointed robotic arm that is fixed to one end of a ping pong table and wields a standard ping pong paddle. Aided by several high-speed cameras and a high-bandwidth predictive control system, the robot quickly estimates the speed and trajectory of an incoming ball and executes one of several swing types — loop, drive, or chop — to precisely hit the ball to a desired location on the table with various types of spin.
In tests, the engineers threw 150 balls at the robot, one after the other, from across the ping pong table. The bot successfully returned the balls with a hit rate of about 88 percent across all three swing types. The robot’s strike speed approaches the top return speeds of human players and is faster than that of other robotic table tennis designs.
Now, the team is looking to increase the robot’s playing radius so that it can return a wider variety of shots. Then, they envision the setup could be a viable competitor in the growing field of smart robotic training systems.
Beyond the game, the team says the table tennis tech could be adapted to improve the speed and responsiveness of humanoid robots, particularly for search-and-rescue scenarios, and situations in which a robot would need to quickly react or anticipate.
“The problems that we’re solving, specifically related to intercepting objects really quickly and precisely, could potentially be useful in scenarios where a robot has to carry out dynamic maneuvers and plan where its end effector will meet an object, in real-time,” says MIT graduate student David Nguyen.
Nguyen is a co-author of the new study, along with MIT graduate student Kendrick Cancio and Sangbae Kim, associate professor of mechanical engineering and head of the MIT Biomimetics Robotics Lab. The researchers will present the results of their experiments in a paper at the IEEE International Conference on Robotics and Automation (ICRA) this month.
Precise play
Building robots to play ping pong is a challenge that researchers have taken up since the 1980s. The problem requires a unique combination of technologies, including high-speed machine vision, fast and nimble motors and actuators, precise manipulator control, and accurate, real-time prediction, as well as higher-level planning of game strategy.
“If you think of the spectrum of control problems in robotics, we have on one end manipulation, which is usually slow and very precise, such as picking up an object and making sure you’re grasping it well. On the other end, you have locomotion, which is about being dynamic and adapting to perturbations in your system,” Nguyen explains. “Ping pong sits in between those. You’re still doing manipulation, in that you have to be precise in hitting the ball, but you have to hit it within 300 milliseconds. So, it balances similar problems of dynamic locomotion and precise manipulation.”
Ping pong robots have come a long way since the 1980s, most recently with designs by Omron and Google DeepMind that employ artificial intelligence techniques to “learn” from previous ping pong data, to improve a robot’s performance against an increasing variety of strokes and shots. These designs have been shown to be fast and precise enough to rally with intermediate human players.
“These are really specialized robots designed to play ping pong,” Cancio says. “With our robot, we are exploring how the techniques used in playing ping pong could translate to a more generalized system, like a humanoid or anthropomorphic robot that can do many different, useful things.”
Game control
For their new design, the researchers modified a lightweight, high-power robotic arm that Kim’s lab developed as part of the MIT Humanoid — a bipedal, two-armed robot that is about the size of a small child. The group is using the robot to test various dynamic maneuvers, including navigating uneven and varying terrain as well as jumping, running, and doing backflips, with the aim of one day deploying such robots for search-and-rescue operations.
Each of the humanoid’s arms has four joints, or degrees of freedom, which are each controlled by an electrical motor. Cancio, Nguyen, and Kim built a similar robotic arm, which they adapted for ping pong by adding an additional degree of freedom in the wrist to allow for control of a paddle.
The team fixed the robotic arm to a table at one end of a standard ping pong table and set up high-speed motion capture cameras around the table to track balls that are bounced at the robot. They also developed optimal control algorithms that predict, based on the principles of math and physics, what speed and paddle orientation the arm should execute to hit an incoming ball with a particular type of swing: loop (or topspin), drive (straight-on), or chop (backspin).
They implemented the algorithms using three computers that simultaneously processed camera images, estimated a ball’s real-time state, and translated these estimations to commands for the robot’s motors to quickly react and take a swing.
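In spirit (though not in implementation detail), that prediction step resembles the sketch below, which estimates a ball's velocity from two camera fixes and extrapolates a ballistic interception point. This is a simplified illustration with made-up numbers, not the team's controller: a real system must also account for drag, spin, and the table bounce.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity, m/s^2

def estimate_state(p0, p1, dt):
    """Estimate ball position and velocity from two camera fixes taken dt seconds apart."""
    v = (p1 - p0) / dt
    return p1, v

def predict_intercept(p, v, paddle_x):
    """Predict when and where the ball crosses the paddle plane x = paddle_x,
    modeling only ballistic flight (no drag, spin, or bounce)."""
    t = (paddle_x - p[0]) / v[0]          # time to reach the paddle plane
    hit = p + v * t + 0.5 * G * t * t     # ballistic position at that time
    return t, hit

# Two hypothetical camera fixes, 5 ms apart, of a ball approaching the paddle.
p0 = np.array([2.74, 0.40, 0.30])
p1 = np.array([2.70, 0.40, 0.31])
p, v = estimate_state(p0, p1, 0.005)
t, hit = predict_intercept(p, v, paddle_x=0.0)
```

A real controller would refit this estimate continuously as new camera frames arrive, rather than trusting just two fixes.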
After consecutively bouncing 150 balls at the arm, they found the robot’s hit rate, or accuracy of returning the ball, was about the same for all three types of swings: 88.4 percent for loop strikes, 89.2 percent for chops, and 87.5 percent for drives. They have since tuned the robot’s reaction time and found the arm hits balls faster than existing systems, at velocities of 20 meters per second.
In their paper, the team reports that the robot’s strike speed, or the speed at which the paddle hits the ball, is on average 11 meters per second. Advanced human players have been known to return balls at speeds of between 21 and 25 meters per second. Since writing up the results of their initial experiments, the researchers have further tweaked the system, and have recorded strike speeds of up to 19 meters per second (about 42 miles per hour).
“Some of the goal of this project is to say we can reach the same level of athleticism that people have,” Nguyen says. “And in terms of strike speed, we’re getting really, really close.”
Their follow-up work has also enabled the robot to aim. The team incorporated control algorithms into the system that predict not only how but where to hit an incoming ball. With its latest iteration, the researchers can set a target location on the table, and the robot will hit a ball to that same location.
Because it is fixed to the table, the robot has limited mobility and reach, and can mostly return balls that arrive within a crescent-shaped area around the midline of the table. In the future, the engineers plan to rig the bot on a gantry or wheeled platform, enabling it to cover more of the table and return a wider variety of shots.
“A big thing about table tennis is predicting the spin and trajectory of the ball, given how your opponent hit it, which is information that an automatic ball launcher won’t give you,” Cancio says. “A robot like this could mimic the maneuvers that an opponent would do in a game environment, in a way that helps humans play and improve.”
This research is supported, in part, by the Robotics and AI Institute.
System lets robots identify an object’s properties through handling
A human clearing junk out of an attic can often guess the contents of a box simply by picking it up and giving it a shake, without the need to see what’s inside. Researchers from MIT, Amazon Robotics, and the University of British Columbia have taught robots to do something similar.
They developed a technique that enables robots to use only internal sensors to learn about an object’s weight, softness, or contents by picking it up and gently shaking it. With their method, which does not require external measurement tools or cameras, the robot can accurately guess parameters like an object’s mass in a matter of seconds.
This low-cost technique could be especially useful in applications where cameras might be less effective, such as sorting objects in a dark basement or clearing rubble inside a building that partially collapsed after an earthquake.
Key to their approach is a simulation process that incorporates models of the robot and the object to rapidly identify characteristics of that object as the robot interacts with it.
The researchers’ technique is as good at guessing an object’s mass as some more complex and expensive methods that incorporate computer vision. In addition, their data-efficient approach is robust enough to handle many types of unseen scenarios.
“This idea is general, and I believe we are just scratching the surface of what a robot can learn in this way. My dream would be to have robots go out into the world, touch things and move things in their environments, and figure out the properties of everything they interact with on their own,” says Peter Yichen Chen, an MIT postdoc and lead author of a paper on this technique.
His coauthors include fellow MIT postdoc Chao Liu; Pingchuan Ma PhD ’25; Jack Eastman MEng ’24; Dylan Randle and Yuri Ivanov of Amazon Robotics; MIT professors of electrical engineering and computer science Daniela Rus, who leads MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Wojciech Matusik, who leads the Computational Design and Fabrication Group within CSAIL. The research will be presented at the International Conference on Robotics and Automation.
Sensing signals
The researchers’ method leverages proprioception, which is a human or robot’s ability to sense its movement or position in space.
For instance, a human who lifts a dumbbell at the gym can sense the weight of that dumbbell in their wrist and bicep, even though they are holding the dumbbell in their hand. In the same way, a robot can “feel” the heaviness of an object through the multiple joints in its arm.
“A human doesn’t have super-accurate measurements of the joint angles in our fingers or the precise amount of torque we are applying to an object, but a robot does. We take advantage of these abilities,” Liu says.
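As a toy illustration of that intuition (and not the method used in the paper, which works from joint-encoder trajectories), a single joint's holding torque is enough to back out a payload's mass from a static torque balance. All arm parameters here are hypothetical.

```python
import math

def payload_mass(tau, theta, arm_mass, arm_len, g=9.81):
    """Solve the static torque balance at a single shoulder joint,
        tau = (arm_mass * L/2 + m_payload * L) * g * cos(theta),
    for the unknown payload mass held at the end of the arm."""
    lever = arm_len * g * math.cos(theta)
    return (tau - arm_mass * (arm_len / 2) * g * math.cos(theta)) / lever

# Hypothetical arm: a 2 kg link, 0.5 m long, held 30 degrees above horizontal,
# with the motor reporting 12.0 N*m of holding torque.
m = payload_mass(tau=12.0, theta=math.radians(30), arm_mass=2.0, arm_len=0.5)
```

For these made-up numbers, the recovered payload is about 1.8 kilograms.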
As the robot lifts an object, the researchers’ system gathers signals from the robot’s joint encoders, which are sensors that detect the rotational position and speed of its joints during movement.
Most robots have joint encoders within the motors that drive their moveable parts, Liu adds. This makes their technique more cost-effective than some approaches because it doesn’t need extra components like tactile sensors or vision-tracking systems.
To estimate an object’s properties during robot-object interactions, their system relies on two models: one that simulates the robot and its motion and one that simulates the dynamics of the object.
“Having an accurate digital twin of the real world is really important for the success of our method,” Chen adds.
Their algorithm “watches” the robot and object move during a physical interaction and uses joint encoder data to work backward and identify the properties of the object.
For instance, a heavier object will move slower than a light one if the robot applies the same amount of force.
Differentiable simulations
They utilize a technique called differentiable simulation, which allows the algorithm to predict how small changes in an object’s properties, like mass or softness, impact the robot’s ending joint position. The researchers built their simulations using NVIDIA’s Warp library, an open-source developer tool that supports differentiable simulations.
Once the differentiable simulation matches up with the robot’s real movements, the system has identified the correct property. The algorithm can do this in a matter of seconds and only needs to see one real-world trajectory of the robot in motion to perform the calculations.
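A one-dimensional toy version of that loop is sketched below: a point mass is "observed" being pushed by a known force, and gradient descent adjusts the simulated mass until the simulated trajectory matches the observed one. Finite differences stand in for the analytic gradients a differentiable simulator such as Warp would provide, and all numbers are made up.

```python
import numpy as np

def simulate(mass, force=5.0, dt=0.01, steps=100):
    """Forward-simulate a point mass pushed by a constant force (semi-implicit Euler)."""
    x, v = 0.0, 0.0
    traj = []
    for _ in range(steps):
        v += (force / mass) * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

# The "real" trajectory the robot observed (here generated with the true mass, 2.0 kg).
observed = simulate(mass=2.0)

# Gradient descent on the unknown mass: nudge the estimate until the simulated
# trajectory matches the observed one. The gradient of the trajectory-matching
# loss is taken by finite differences.
m, lr, eps = 1.0, 0.5, 1e-4
for _ in range(200):
    loss = np.mean((simulate(m) - observed) ** 2)
    grad = (np.mean((simulate(m + eps) - observed) ** 2) - loss) / eps
    m -= lr * grad
```

After the loop, the estimate lands on the true mass of 2.0 kg from a single observed trajectory, mirroring the paper's claim that one real-world motion is enough.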
“Technically, as long as you know the model of the object and how the robot can apply force to that object, you should be able to figure out the parameter you want to identify,” Liu says.
The researchers used their method to learn the mass and softness of an object, but their technique could also determine properties like moment of inertia or the viscosity of a fluid inside a container.
Plus, because their algorithm does not need an extensive dataset for training like some methods that rely on computer vision or external sensors, it would not be as susceptible to failure when faced with unseen environments or new objects.
In the future, the researchers want to try combining their method with computer vision to create a multimodal sensing technique that is even more powerful.
“This work is not trying to replace computer vision. Both methods have their pros and cons. But here we have shown that without a camera we can already figure out some of these properties,” Chen says.
They also want to explore applications with more complicated robotic systems, like soft robots, and more complex objects, including sloshing liquids or granular media like sand.
In the long run, they hope to apply this technique to improve robot learning, enabling future robots to quickly develop new manipulation skills and adapt to changes in their environments.
“Determining the physical properties of objects from data has long been a challenge in robotics, particularly when only limited or noisy measurements are available. This work is significant because it shows that robots can accurately infer properties like mass and softness using only their internal joint sensors, without relying on external cameras or specialized measurement tools,” says Miles Macklin, senior director of simulation technology at NVIDIA, who was not involved with this research.
This work is funded, in part, by Amazon and the GIST-CSAIL Research Program.
The FCC Must Reject Efforts to Lock Up Public Airwaves
President Trump’s attack on public broadcasting has attracted plenty of deserved attention, but there’s a far more technical, far more insidious policy change in the offing—one that will take away Americans’ right to unencumbered access to our publicly owned airwaves.
The FCC is quietly contemplating a fundamental restructuring of all broadcasting in the United States, via a new DRM-based standard for digital television equipment, enforced by a private “security authority” with control over licensing, encryption, and compliance. This move is confusingly called the “ATSC Transition” (ATSC is the digital TV standard the US switched to in 2009 – the “transition” here is to ATSC 3.0, a new version with built-in DRM).
The “ATSC Transition” is championed by the National Association of Broadcasters, who want to effectively privatize the public airwaves, allowing broadcasters to encrypt over-the-air programming, meaning that you will only be able to receive those encrypted shows if you buy a new TV with built-in DRM keys. It’s a tax on American TV viewers, forcing you to buy a new TV so you can continue to access a public resource you already own.
This may not strike you as a big deal. Lots of us have given up on broadcast and get all our TV over the internet. But millions of Americans still rely heavily or exclusively on broadcast television for everything from news to education to simple entertainment. Many of these viewers live in rural or tribal areas, and/or are low-income households who can least afford to “upgrade.” Historically, these viewers have been able to rely on access to broadcast because, by law, broadcasters get extremely valuable spectrum licenses in exchange for making their programming available for free to anyone within range of their broadcast antennas.
If broadcasters have cool new features the public will enjoy, they don’t need to force us to adopt them
Adding DRM to over-the-air broadcasts upends this system. The “ATSC Transition” is really a transition from the century-old system of universally accessible programming to a privately controlled web of proprietary technological restrictions. It’s a transition from a system where anyone can come up with innovative new TV hardware to one where a centralized, unaccountable private authority gets a veto right over new devices.
DRM licensing schemes like this are innovation killers. Prime example: DVDs and DVD players, which have been subject to a similar central authority, and haven’t gotten a single new feature since the DVD player was introduced in 1995.
DRM is also incompatible with fundamental limits on copyright, like fair use. Those limits let you do things like record a daytime baseball game and then watch it after dinner, skipping the ads. Broadcasters would like to prevent that and DRM helps them do it. Keep in mind that bypassing or breaking a DRM system’s digital keys—even for lawful purposes like time-shifting, ad-skipping, security research, and so on—risks penalties under Section 1201 of the Digital Millennium Copyright Act. That is, unless you have the time and resources to beg the Copyright Office for an exemption (and, if the exemption is granted, to renew your plea every three years).
Broadcasters say they need this change to offer viewers new interactive features that will serve the public interest. But if broadcasters have cool new features the public will enjoy, they don’t need to force us to adopt them. The most reliable indicator that a new feature is cool and desirable is that people voluntarily install it. If the only way to get someone to use a new feature is to lock up the keys so they can’t turn it off, that’s a clear sign that the feature is not in the public interest.
That's why EFF joined Public Knowledge, Consumer Reports and others in urging the FCC to reject this terrible, horrible, no good, very bad idea and keep our airwaves free for all of us. We hope the agency listens, and puts the interests of millions of Americans above the private interests of a few powerful media cartels.
Appeals Court Sidesteps The Big Questions on Geofence Warrants
Another federal appeals court has ruled on controversial geofence warrants—sort of. Last week, the US Court of Appeals for the Fourth Circuit sitting en banc issued a single-sentence opinion affirming the lower court opinion in United States v. Chatrie. The practical outcome of this sentence is clear: the evidence collected from a geofence warrant issued to Google can be used against the defendant in this case. But that is largely where the clarity ends, because the fifteen judges of the Fourth Circuit who heard the en banc appeal agreed on little else. The judges wrote a total of nine separate opinions, no single one of which received a majority of votes. Amid this fracture, the judges essentially deadlocked on important constitutional questions about whether geofence warrants are a Fourth Amendment search. As a result, the new opinion in Chatrie is a missed opportunity for the Fourth Circuit to join the two other appellate courts that have considered the issue in finding geofence warrants unconstitutional.
Geofence warrants require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located within a geographic area and time period both specified by law enforcement. This creates a high risk of suspicion falling on innocent people and can reveal sensitive and private information about where individuals have traveled in the past. Following intense scrutiny from the press and the public, Google announced changes to how it stores location data in late 2023, apparently with the effect of eventually making it impossible for the company to respond to geofence warrants.
Regardless, numerous criminal cases involving geofence evidence continue to make their way through the courts. The district court decision in Chatrie was one of the first, and it set an important precedent in finding the warrant overbroad and unconstitutional. However, the court allowed the government to use the evidence it obtained because it relied on the warrant in “good faith.” On appeal, a three-judge panel of the Fourth Circuit voted 2-1 that the geofence warrant did not constitute a search at all. Later, the appeals court agreed to rehear the case en banc, in front of all active judges in the circuit. (EFF filed amicus briefs at both the panel and en banc stages of the appeal).
The only agreement among the fifteen judges who reheard the case was that the evidence should be allowed in, with at least eight relying on the good faith analysis. Meanwhile, seven judges argued that geofence warrants constitute a Fourth Amendment search in at least some fashion, while exactly seven disagreed. Although that means the appellate court did not rule on the Fourth Amendment implications of geofence warrants, neither did it vacate the lower court’s solid constitutional analysis.
Above all, it remains the case that every appellate court to rule on geofence warrants to date has found serious constitutional defects. As we explain in every brief we file in these cases, reverse warrants like these are the very sort of “general searches” that the authors of the Fourth Amendment sought to prohibit.
Dopamine signals when a fear can be forgotten
Dangers come but dangers also go, and when they do, the brain has an “all-clear” signal that teaches it to extinguish its fear. A new study in mice by MIT neuroscientists shows that the signal is the release of dopamine along a specific interregional brain circuit. The research therefore pinpoints a potentially critical mechanism of mental health, restoring calm when it works, but prolonging anxiety or even post-traumatic stress disorder when it doesn’t.
“Dopamine is essential to initiate fear extinction,” says Michele Pignatelli di Spinazzola, co-author of the new study from the lab of senior author Susumu Tonegawa, Picower Professor of biology and neuroscience at the RIKEN-MIT Laboratory for Neural Circuit Genetics within The Picower Institute for Learning and Memory at MIT, and a Howard Hughes Medical Institute (HHMI) investigator.
In 2020, Tonegawa’s lab showed that learning to be afraid, and then learning when that’s no longer necessary, result from a competition between populations of cells in the brain’s amygdala region. When a mouse learns that a place is “dangerous” (because it gets a little foot shock there), the fear memory is encoded by neurons in the anterior of the basolateral amygdala (aBLA) that express the gene Rspo2. When the mouse then learns that a place is no longer associated with danger (because they wait there and the zap doesn’t recur), neurons in the posterior basolateral amygdala (pBLA) that express the gene Ppp1r1b encode a new fear extinction memory that overcomes the original dread. Notably, those same neurons encode feelings of reward, helping to explain why it feels so good when we realize that an expected danger has dwindled.
In the new study, the lab, led by former members Xiangyu Zhang and Katelyn Flick, sought to determine what prompts these amygdala neurons to encode these memories. The rigorous set of experiments the team reports in the Proceedings of the National Academy of Sciences show that it’s dopamine sent to the different amygdala populations from distinct groups of neurons in the ventral tegmental area (VTA).
“Our study uncovers a precise mechanism by which dopamine helps the brain unlearn fear,” says Zhang, who also led the 2020 study and is now a senior associate at Orbimed, a health care investment firm. “We found that dopamine activates specific amygdala neurons tied to reward, which in turn drive fear extinction. We now see that unlearning fear isn’t just about suppressing it — it’s a positive learning process powered by the brain’s reward machinery. This opens up new avenues for understanding and potentially treating fear-related disorders, like PTSD.”
Forgetting fear
The VTA was the lab’s prime suspect to be the source of the signal because the region is well known for encoding surprising experiences and instructing the brain, with dopamine, to learn from them. The first set of experiments in the paper used multiple methods for tracing neural circuits to see whether and how cells in the VTA and the amygdala connect. They found a clear pattern: Rspo2 neurons were targeted by dopaminergic neurons in the anterior and left and right sides of the VTA. Ppp1r1b neurons received dopaminergic input from neurons in the center and posterior sections of the VTA. The density of connections was greater on the Ppp1r1b neurons than for the Rspo2 ones.
The circuit tracing showed that dopamine is available to amygdala neurons that encode fear and its extinction, but do those neurons care about dopamine? The team showed that indeed they express “D1” receptors for the neuromodulator. Commensurate with the degree of dopamine connectivity, Ppp1r1b cells had more receptors than Rspo2 neurons.
Dopamine does a lot of things, so the next question was whether its activity in the amygdala actually correlated with fear encoding and extinction. Using a method to track and visualize it in the brain, the team watched dopamine in the amygdala as mice underwent a three-day experiment. On Day One, they went to an enclosure where they experienced three mild shocks on the feet. On Day Two, they went back to the enclosure for 45 minutes, where they didn’t experience any new shocks — at first, the mice froze in anticipation of a shock, but then relaxed after about 15 minutes. On Day Three they returned again to test whether they had indeed extinguished the fear they showed at the beginning of Day Two.
The dopamine activity tracking revealed that during the shocks on Day One, Rspo2 neurons had the larger response to dopamine, but in the early moments of Day Two, when the anticipated shocks didn’t come and the mice eased up on freezing, the Ppp1r1b neurons showed the stronger dopamine activity. More strikingly, the mice that learned to extinguish their fear most strongly also showed the greatest dopamine signal at those neurons.
Causal connections
The final sets of experiments sought to show that dopamine is not just available and associated with fear encoding and extinction, but also actually causes them. In one set, they turned to optogenetics, a technology that enables scientists to activate or quiet neurons with different colors of light. Sure enough, when they quieted VTA dopaminergic inputs in the pBLA, doing so impaired fear extinction. When they activated those inputs, it accelerated fear extinction. The researchers were surprised that when they activated VTA dopaminergic inputs into the aBLA they could reinstate fear even without any new foot shocks, impairing fear extinction.
The other way they confirmed a causal role for dopamine in fear encoding and extinction was to manipulate the amygdala neurons’ dopamine receptors. In Ppp1r1b neurons, over-expressing dopamine receptors impaired fear recall and promoted extinction, whereas knocking the receptors down impaired fear extinction. Meanwhile in the Rspo2 cells, knocking down receptors reduced the freezing behavior.
“We showed that fear extinction requires VTA dopaminergic activity in the pBLA Ppp1r1b neurons by using optogenetic inhibition of VTA terminals and cell-type-specific knockdown of D1 receptors in these neurons,” the authors wrote.
The scientists are careful in the study to note that while they’ve identified the “teaching signal” for fear extinction learning, the broader phenomenon of fear extinction occurs brainwide, rather than in just this single circuit.
But the circuit seems to be a key node to consider as drug developers and psychiatrists work to combat anxiety and PTSD, Pignatelli di Spinazzola says.
“Fear learning and fear extinction provide a strong framework to study generalized anxiety and PTSD,” he says. “Our study investigates the underlying mechanisms suggesting multiple targets for a translational approach, such as pBLA and use of dopaminergic modulation.”
Marianna Rizzo is also a co-author of the study. Support for the research came from the RIKEN Center for Brain Science, the HHMI, the Freedom Together Foundation, and The Picower Institute.
Chinese AI Submersible
A Chinese company has developed an AI-piloted submersible that can reach speeds “similar to a destroyer or a US Navy torpedo,” dive “up to 60 metres underwater,” and “remain static for more than a month, like the stealth capabilities of a nuclear submarine.” In case you’re worried about the military applications of this, you can relax because the company says that the submersible is “designated for civilian use” and can “launch research rockets.”
“Research rockets.” Sure.
Meet the 4 influencers shaping Chris Wright’s worldview
Judge asks lawyers to assess Trump’s order targeting state climate cases
Oregon lawmakers ready to junk contentious wildfire map
Top House appropriator backs disaster program killed by Trump
Canada’s climate leader goes to Washington
Clean-tech firm that turns CO2 into rock secures new funding
Credits tied to shutting Asia coal plants early win backing
Clear plans needed to deploy climate adaptation funds, UN says
Britain’s green tax collection falls to record low
Podcast Episode: Digital Autonomy for Bodily Autonomy
We all leave digital trails as we navigate the internet – records of what we searched for, what we bought, who we talked to, where we went or want to go in the real world – and those trails usually are owned by the big corporations behind the platforms we use. But what if we valued our digital autonomy the way that we do our bodily autonomy? What if we reclaimed the right to go, read, see, do and be what we wish online as we try to do offline? Moreover, what if we saw digital autonomy and bodily autonomy as two sides of the same coin – inseparable?
(You can also find this episode on the Internet Archive and on YouTube.)
Kate Bertash wants that digital autonomy for all of us, and she pursues it in many different ways – from teaching abortion providers and activists how to protect themselves online, to helping people stymie the myriad surveillance technologies that watch and follow us in our communities. She joins EFF’s Cindy Cohn and Jason Kelley to discuss how creativity and community can align to center people in the digital world and make us freer both online and offline.
In this episode you’ll learn about:
- Why it’s important for local communities to collaboratively discuss and decide whether and how much they want to be surveilled
- How the digital era has blurred the bright line between public and private spaces
- Why we can’t surveil ourselves to safety
- How DefCon – America's biggest hacker conference – embodies the ideal that we don’t have to simply accept technology as it’s given to us, but instead can break, tinker with, and rebuild it to meet our needs
- Why building community helps us move beyond hopelessness to build and disseminate technology that helps protects everyone’s privacy
Kate Bertash works at the intersection of tech, privacy, art, and organizing. She directs the Digital Defense Fund, launched in 2017 to meet the abortion rights and bodily autonomy movements’ increased need for security and technology resources after the 2016 election. This multidisciplinary team of organizers, engineers, designers, abortion fund and practical support volunteers provides digital security evaluations, conducts staff training, maintains a library of go-to resources on reproductive justice and digital privacy, and builds software for abortion access, bodily autonomy, and pro-democracy organizations. Bertash also engages in various multidisciplinary civic tech projects as a project manager, volunteer, activist, and artist; she’s especially interested in ways that artistic methods can interrogate use of AI-driven computer vision, other analytical technologies in surveillance, and related intersections with our civil rights.
Resources:
- Digital Defense Fund and its 2022 EFF Award
- Dobbs v. Jackson Women’s Health Organization (U.S. Supreme Court No. 19–1392, decided June 24, 2022)
- EFF: Two Years Post-Roe: A Better Understanding of Digital Threats
- EFF: Surveillance Self-Defense
- DEF CON
What do you think of “How to Fix the Internet?” Share your feedback here.
KATE BERTASH: It is me, having my experience, like walking through these spaces, and so much of that privacy, right, should, like, treat me as if my digital autonomy in this space is as important as my bodily autonomy in the world.
I think it's totally possible. I have such amazing optimism for the idea of reclaiming our digital autonomy and understanding that it is like the you that moves through the world in this way, rather than just some like shoddy facsimile or some, like, shadow of you.
CINDY COHN: That’s Kate Bertash speaking about how the world will be better when we recognize that our digital selves and our physical selves are the same, and that reclaiming our digital autonomy is a necessary part of reclaiming our bodily autonomy. And that’s especially true for the people she focuses on helping, people who are seeking reproductive assistance.
I’m Cindy Cohn, the executive director of the Electronic Frontier Foundation.
JASON KELLEY: And I’m Jason Kelley – EFF’s Activism Director. This is our podcast series How to Fix the Internet.
CINDY COHN: The idea behind this show is that we're trying to make our digital lives BETTER. Now a big part of our job at EFF is to envision the ways things can go wrong online-- and jumping into the action to help when things then DO go wrong.
But this show is about optimism, hope and solutions – we want to share visions of what it looks like when we get it right.
JASON KELLEY: Our guest today is someone who has been tirelessly fighting for the safety and privacy of a very vulnerable group of people for many years – and she does so with compassion, creativity and joy.
CINDY COHN: Kate Bertash is a major force in the world of digital privacy and security. Her work with the Digital Defense Fund started in 2017 as a resource to help serve the digital security needs of people seeking abortions and other reproductive care, and they have expanded their purview to include trans rights, elections integrity, harm reduction and other areas that are crucial to an equitable and functional democratic society. She’s also an artist, with a clothing line called Adversarial Fashion. She designs clothes that do all sorts of deliciously sneaky things – like triggering automatic license plate readers, or injecting junk data into invasive state and corporate monitoring systems. We’re just delighted to have her with us today - welcome Kate!
KATE BERTASH: Thank you so much for having me on. What an introduction.
CINDY COHN: Well, let's start with your day job, privacy and reproductive rights. You've been doing this since long before it became, you know, such a national crisis. Tell us about the Digital Defense Fund.
KATE BERTASH: So after Donald Trump was elected in 2016, I had started running some, what I would call, tech volunteering events, the most well known of which is the Abortion Access Hackathon in San Francisco. We had about 700 people apply to come, and hundreds of people over the weekend who basically were able to help organizations with very functional requests.
So we went to different organizations in the area and worked to ensure that they could get help with, you know, turning a spreadsheet into a database, or getting help working on open source software that they use for case management, or fixing something that was broken in their Salesforce. So, very functional stuff.
And then I was approached after that and asked if I wanted to run this new fund, the Digital Defense Fund. So we spent the first couple years kind of figuring out what the fund was going to do, but sort of organically, and learning basically from the people that we serve and the organizations that work in abortion access, we now have this model where we can provide hands-on, totally free digital security and privacy support to organizations working in the field.
We provide everything from digital security evaluations to trainings. We do a lot of project management, connecting folks with different kinds of vendor software, community support, a lot of professional development.
And I think probably the best part is we also get to help them fund those improvements. So I know we always talk a lot about how things can improve, but I think kind of seeing it through, uh, and getting to watch people actually, you know, install things and turn them on and learn how to be their own experts has been a really incredible experience. So I can't believe that was eight years ago.
JASON KELLEY: You know, a lot has changed in eight years. We had the Dobbs decision, um, that happened under the Biden administration, and now we've got the Dobbs decision under a Trump administration. I assume that, you know, your work has changed a lot. Like, at EFF we've been doing some work with the Repro Uncensored Coalition, tracking the changes in takedowns of abortion-related content. And that is a hard thing to do, for all the reasons, um, you know, tracking what systems take down is sort of a thing you have to do one at a time and just put the data together. But for you, out of eight years, what's different now than, maybe not 2017, but certainly, you know, 2022?
KATE BERTASH: I think this is a really excellent question just because I think it's kind of strange to look backwards and, and know that, uh, abortion access is a really interesting space in that for decades it's been under various kinds of different legal, and I would say ideological attacks as well as, you know, dealing with the kind of common problems of nonprofits, usually funding, often being targets of financial scams and crime as all nonprofits are.
But I think the biggest change has been that, um, a lot of folks who, I think, could always lean on the idea that abortion would be federally legal, and so your job may be helping people get their abortions, or performing abortions, or supporting folks with funding to get to their procedures, that always sort of had this, like, color of law that would kind of back you up or provide for you a certain level of security.
Um, now we kind of don't have that safety, mentally, even to lean on anymore as well as legally. And so a lot of the meat and potatoes of the work that we do, um, often it was always about, you know, ensuring patient privacy. But a lot of times now it's also ensuring that organizations are kind of ready to ask and answer kind of hard questions about how they wanna work. What data is at risk when so much is uncertain in a legal space?
Because I think, you know, I hardly have to tell anybody at EFF that, often, uh, we kind of don't know what, what quote unquote qualifies or what is legal under a particular new law or statute until somebody makes you prove it in court.
And I think a lot of our job at Digital Defense Fund really then crystallized into what we can do to help people sort of tolerate this level of uncertainty and ensure that your tools and that your tactics and your understanding even of the environment that you're operating in at least buoys you and is a source of certainty and safety when the world cannot be.
CINDY COHN: Oh, I think that's great. Do you have a, an example?
KATE BERTASH: Yes, absolutely. I think one of the biggest changes that I've seen in how people tend to work and operate is that, uh, I think you know, this kind of backs into many other topics that I know get discussed on this podcast, which is that when we reach into our pocket for the computer that is on us all day, you know, our phone and we reach out to text people, it's, it's a very accessible way to reach somebody and trying to really wrap around the understanding of the difference between sending an SMS text message to somebody, or responding to a text message asking about services that your organization provides or where to get an abortion or something like that, and the difference of how much information is kept, for example, by your cell phone carrier. Usually, you know, as all of you have taught all of us very well, uh, in plain text as far as we know forever.
Uh, and the absolute huge difference then of getting to really inform people about this sort of static understanding of our environment that we operate in, that we kind of take for granted every day when we're just, like, texting our friends or, you know, getting a message about whether something's ready for pickup at the pharmacy. Uh, and then instead we get to help move people onto other tools, encrypted chat like Signal or Wire or whatever meets their needs, helping meet people where they're at on other platforms like WhatsApp, and to really not just, like, tell people these are the quote-unquote correct tools to use, because certainly there are many great ones, uh, you know, all roads lead to Rome, as they say.
But I think getting to improve people's sort of environmental understanding of the ocean that we're all swimming in, uh, that it actually doesn't have to work this way, but that these are also the results of systems that, are motivated by capital and how you make money off of data. And so I think trying to help people to be prepared then to make different decisions when they encounter new questions or new technologies has been a really, really big piece of it. And I love that it gets to start with something as simple as, you know, a safer place to have a sensitive conversation in a text message on your phone in a place like Signal. So, yeah.
CINDY COHN: Yeah, no, I think that makes such sense. And we've seen this, right? I mean, you know, we had a mother in Nebraska who went to jail because she used Facebook to communicate with her daughter, I believe about getting reproductive help. And shifting to just a different platform might've changed that story quite a bit because, you know, Facebook had this information, and, you know, one of the things that we know as lawyers is that when Facebook gets a subpoena or process asking for information about a user, the government doesn't have to tell them what the prosecution is for, right? So, you know, it could be a bank robber or it could be a person seeking reproductive help. The company is not in a position to really know that. Now we've worked in a couple places to create situations in which, if the company does happen to know for some reason, they can resist.
But the way that the baseline legal system works means that we can't just, you know, uh, as much as I love to blame Facebook, we can't blame Facebook for this. We need actual tools that help protect people from the jump.
KATE BERTASH: Absolutely, and I think that case is a really important example of, especially I think, how unclear it is from platform to platform, sort of how that information is kept and used.
I think one of the really tragic things about that conversation was that it was a very loving conversation. It was the kind of experience I think you would want to have between a parent and child, to be able to be there for each other. And they were even talking to each other while they were in the same house, just sharing a conversation from one room to the next. And seeing the reaction the public had to that was, I think, very affirming to me that it was wrong, uh, that just the way this platform is structured somehow then put this extra amount of risk on this family.
I think, because, you know, we can imagine that it should be a common experience or common right to just have a simple conversation within your household and to know that it's in a safe place, that it's treated with the sensitivity that it deserves. And I think it helps us to understand that, you know, we are actually, and I mean this in a good sense of the word, entitled to that. And seeing Meta respond to the sort of outcry was also a very, like, positive flag for me, because their comms department does not typically respond to any individual subpoena they receive, but they felt they had to come out and say why they responded and what the problem was there, um, I think as sort of an indication that this is important.
These different kinds of cases that come up, especially around abortion and criminalization, one of the reasons I think they're so important for us to cover is that, you know, on this podcast or within the spaces that both you and I work with so much about digital security and privacy kind of exists in this very like cloudy, theoretical space.
Like we have these, like, ideals of what we know we want to be true and, and often, you know, when you, when you're talking to folks about like big data, it's literally so large that it can be hard to like pin it down and decide how you feel. But these cases, they provide these concrete examples of how you think the world actually should or should not work.
And it really nails it down and lets people form these very strong emotional responses to it. Um, that's why I'm so grateful that, um, you know, organizations like yours get to help us contextualize that like, yes, there's this like, really personal, uh, and, and tragic story – and it also takes place within this larger conversation around your digital civil liberties.
CINDY COHN: Yeah, so let's flip that around a little bit. I've heard you talk about this before, which is, what would the world look like if our technologies actually stood up for us in these contexts? And, you know, inside the home is a very particular one. And I think because the Fourth Amendment is really clear about the need for privacy. It's one of the places where privacy is actually in our constitution, but I think we're having a broader conversation, like what would the world look like if the tools protected us in these times?
KATE BERTASH: I think especially, it's really interesting to think about the, the problems that I know I've learned so much from your team around the, the problem of what is public and what is private. I think, you know, we always talk about abortion access as a right to privacy and then it suddenly exists in this space where we kind of really haven't decided what that means, and especially anything that's very fuzzy about that.
People are often very familiar with the image of the protestor outside of the abortion clinic. There are many of the same problems kind of wrapped up in the fact that protestors will often film or take photographs or write down the license plates of people who are going in and out of clinics, often for a variety of reasons, but mostly to surveil them in some of the same ways we actually see from state actors or from corporations, just done on a very personal basis.
And it has a lot of that same level of damage. And we frequently have had to capitulate that, well, this is a public space, you know, people can take photos in a public area, and that information that is taken about your personal abortion experience unfortunately, you know, can be used and misused in whatever way people want.
And then we watched that exact same problem map itself onto the online space. So yeah, very important to me.
CINDY COHN: I think this is one of the fundamental, things that the digital era brought us was an increasing recognition that this bright line between public spaces and private spaces isn't working.
And so we need a more, you know – it's not like there aren't public spaces online. I definitely want reporters to be able to, you know, do investigations that give us information about people in power and what they're doing. Um, so it's not either-or, right? And I think that's the thing: we have to have a more nuanced conversation about which public spaces are really not public in context. You know, what we think of as bright-line public spaces aren't really rightfully treated as public. And I love your reframing about this as being about us. It's about us and our lives.
KATE BERTASH: Absolutely. Uh, I think one of the larger kind of examples that has come up as well is that your experience of seeking out medical care actually then travels into the domain of the doctor that you see; they often use an electronic health records system. And so you have this record of something that I don't think any of these companies were really quite adequately prepared for, for the policy eventuality that they would be holding information that would be an enshrined human right in some states’ constitutions, but a crime in a different state. And, you know, you have these products like Epic Everywhere, and they allow access to that same information from a variety of places, including from a state where, to that state, it is evidence of a crime to have this in the health record, versus just, you know, normal continuity of care in a different state.
And kind of seeing how, you know, we tend to have these sort of debates and understandings and trying to, like you say, examine the nuance and get to the bottom of how we wanna live in these different contexts of policy or in court cases. But then so much of it is held in this corporate space, and I think they really are not ready for the fact that they are going to have to take a much more active role, I think, than they even want to, uh, in understanding how that shows up for us.
JASON KELLEY: Let’s take a quick moment to say thank you to our sponsor.
“How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also want to thank EFF members and donors. You’re the reason that we exist. You can become a member, if you’re not one already, for just $25, and for a little more you can get some great, very stylish gear. The more members we have, the more power we have - in statehouses, courthouses, and on the streets. EFF has been fighting for digital rights for decades, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate.
And now back to our conversation with Kate Bertash.
So we've been talking a lot about the skills and wisdom that you've learned during the fight for reproductive rights, but I know a lot of that can be used in other areas as well. And I heard recently that you live in a pretty small rural town, and not all your neighbors share your political views. But you've been building sort of a local movement to fight surveillance there – and I’d love to hear about how you are bringing together different people with different sorts of political alignments to come together on this privacy issue.
KATE BERTASH: Yeah, it actually had started so many years ago with Dave Moss, who's on the EFF team and I having a conversation about the license plate surveillance actually at clinics and, and kind of how that's affected by the proliferation of automated license plate reader technology. And I had come up with this, this like line of clothing called Adversarial Fashion, which, uh, injects junk into automated license plate readers.
It was a really fun project. I was really happy to see the public response to it, but as a result, I sort of learned a lot about these systems and kind of became a bit of an activist on the privacy issues around them.
And then suddenly, I now live in a rural community in southwest Washington, and I found out on Facebook one day that our sheriff's department had purchased Flock automated license plate reader cameras and had already installed them, and just announced it. Like, there was no public discussion, no debate, no nothing. There had been debate in neighboring counties where they decided, oh, kind of not for us, like a lot of rural communities. And I wanna give you a sense of the size: our county has 12,000 people in it. My town has a thousand people in it. So very tiny. You almost wonder why you would even need license plate surveillance when you could just literally ask almost anybody what's going on. I've seen people before on Facebook where they're like, hey, is this your car? Somebody stole it. Come pick it up, it's on our hill.
CINDY COHN: I grew up in a very small town in Iowa and the saying in our town was, you know, you don't need turn signals 'cause everybody knows where you're going.
KATE BERTASH: I love that. See, exactly. I did not know that about you, Cindy. I love that. And that was kind of this initiating, uh, event where I, I'll be honest with you, I totally hit the ceiling when I found out. I was really mad because, you know, you're active on all this stuff outside of your work, and, you know, I've been all over the country talking about the problems with this technology and the privacy issues that it raises, and how tech companies take advantage of communities, and here they were taking advantage of my community.
It's like, not in my house! How is it in my house?
JASON KELLEY: Well, when did this happen? When? When did they install these?
KATE BERTASH: Oh my gosh, it had to be a couple of months ago. I mean, it was very, very recently. Yeah, it was super recently, and so I kind of did what I know best, which is that I took everything that I learned, I put it into a presentation to my neighbors. I scheduled a bunch of nights at the different libraries and community centers in my county, and invited everybody to come, and the sheriff and the undersheriff came too.
And the most surprising thing about this was that I think, A, that people showed up. I was actually very pleasantly surprised. I think a lot of people, when they move to rural areas, they do so because, you know, they want to feel freer to be not, you know, watched every day by the state or by corporations, or even by their neighbors, frankly.
And so it was really surprising to me when this was probably the most politically diverse room I've ever presented to. And definitely people that I think would absolutely not love any of the rest of my politics. But both nights, one hundred percent of the room was in agreement that they did not like these cameras, did not think that they were a good fit for our community, and that they don't really appreciate, you know, not being asked.
I think that was kind of the core thing we wanted to get through: even if you do decide these are a good fit, we should have been asked first. And I got people shaking my hand afterwards, like, thank you, young lady, for bringing up this important issue.
Um, it's still ongoing. Some of them have been removed, uh, but not all of them. And I think there's a lot closer scrutiny now on, like, the disclosure page that Flock puts up, where you get to see kind of how the data is accessed. Uh, but, you know, I've been doing this privacy and safety work for a while, and it made me realize I still have room to be surprised. And I think that, like, I was surprised that everybody in my community was very united on privacy. It might be the thing on which we most agree, and that was so heartwarming. I really can't wait to keep building on that and using it as a way to connect with people.
CINDY COHN: So I'd like to follow up because we've been working hard to try to figure out how to convince people that you can't surveil yourself to safety, right? This stuff is always promoted as if it's going to make us safe. What stories did you hear that were resonating with people? What was the counter story from, you know, surveillance equals safety.
KATE BERTASH: I think the biggest story that I knew really connected with folks was actually the way in which that data was shared outside of our community. And there was somebody sitting in the room who I think had elaborated on that point. She said, I might like you as the sheriff, you know. These are all people who voted for the sheriff, and we got to actually have this conversation face to face, which was really quite amazing. And they got to say to the sheriff: I voted for you. I might like you just fine. I might think you would be responsible logging into this stuff, but I don't know all those people who these platforms share this stuff with.
And Flock actually shares your data, unless you specifically request that they turn it off, and I think that was where they were like, you know, I don't trust those people, I don't know those people.
I also don't know your successor. Who's gonna get this? If we give this power to this office, I might not trust the future sheriff as much. And in a small town, like, that personal relationship matters a lot. And I think it was like really helpful to kind of take it out of this, you know, I am obviously very concerned about the ways in which they're, you know, abusive of policing technology and power. I think though, because like so many of these people are people who are your neighbors and you know them, it was so helpful to kind of put it in terms of like, you know, I don't want you to think it's about whether or not I trust your confidence personally.
It's about rather what we maybe owe each other. And you know, I wish you had asked me first, and it became a very like, powerful personal experience and a personal narrative. And, and I think even at the end of the night, like by the second night, I think the sheriff's department had really changed their tune a lot.
And I said to them, I was like, this is the longest we've ever gotten to talk to each other. And I think that's a great thing.
CINDY COHN: I think that's really great. And what I love about this is landing, it really, you know, community has come up over and over again in the way that we've talked to different people about what's important about making technology serve people.
KATE BERTASH: Yeah, people make these decisions very emotionally. And I think it was really nice to be able to talk about trust and relationships and communication, because so much of the conversation, when it's just held online, gets pulled into what I think is everybody in this room's least favorite phrase: if you're not doing anything wrong, why do you care about being surveilled?
And it's just sort of like, well, it's not about whether or not I'm committing a crime. It's about whether or not, you know, we've had a discussion about what we should all know about each other, or like, why don't you just come over and ask me first.
I still want our community to have the ability to get people’s stolen cars back or to like find somebody who is like a, a lost senior adult or, or a child who's been abducted, you know? But these are like problems. Then we get to solve together rather than in this like adversarial manner where everybody's an obstacle to some public good.
JASON KELLEY: One of the things that I think a lot of the people we talk with, but I think you in particular are bringing to this conversation is, I don't know, optimism, joy, creativity.
You're someone who is dealing with some complicated, difficult, often depressing stuff. And you think about how to get people involved in ways that aren't, you know, uh, using the word dystopia, which is a word we use too much at EFF because it's too often becoming true. Cindy, I think, mentioned earlier the adversarial fashion line. I think you've done a lot of work in getting people who aren't necessarily engineers thinking about, like, data issues clearly.
Tell us a little bit about the adversarial fashion work and also just, you know, how we get more people involved in protecting privacy that aren't necessarily the ones working at Facebook, right?
KATE BERTASH: So one of the most fun things about the adversarial fashion line, uh, was in, in kind of researching how I was gonna do that. The reason I did it is because I actually spent some of my free time designing fabrics, like mostly stuff with little, you know, manatees or cats on them, like silly things for kids.
And so I was like, yeah, it's, it's a surface pattern. I could definitely do that. Seems easy. Uh, and I got to research and find out more about sort of the role that art has in a lot of anti-surveillance movements. There's a lot of really cool anti surveillance art projects. Uh, it has been amazing as I present adversarial fashion, uh, in different places to kinda show off how that works.
So the way that the adversarial fashion line works is that these clothes basically have these sort of iterations of what kind of look like plates on them. And automated license plate readers are kind of interesting in that they're what a software engineer might term a system with low specificity, which is that they are working on a highway at, you know, 60, 70 miles an hour.
They're ingesting hundreds, sometimes thousands of plates a minute. So they really have to just be generous in what they're willing to ingest. So they vacuum up things like picket fences and billboards. And so clothing was kind of trivial, frankly, to get them to pick up as well.
And what was really nice about the example of, you know, a shirt that could be read as a car by some of these systems, and it was very easy to show this, especially on some of the open source systems that use the exact same models deployed in surveillance technology that's bought and sold, uh, is that you would really think differently about your plate being seen someplace as something that might implicate you in a crime, or determine a pattern of behavior, or justify somebody surveilling you further, if it can be fooled by a t-shirt.
And, you know, much like the example we talked about, uh, with conversations being held on a place like Facebook, anti-surveillance artworks are cool in that they help people who feel like they're not technical enough, or don't really understand the underlying pieces of technology, have a concrete example that they can form a really strong reaction to. Some of the people whose interest really thrilled me were, like, criminal defense attorneys who reached out and asked a bunch of questions.
We have a lot of other people who are artists or designers who are like, how did you learn to use these systems? Did you need to know how to code? And I'm like, no, there's actually a bunch of ALPR apps that are available on, you know, the Apple store, or that you can use on your computer or your phone, and you can test out the things that you've made.
And this actually works for many other systems. So, you know, facial recognition systems: if you wanna play around and come up with really great, you know, clothing or masks or makeup or something, you can actually test it with the facial recognition piece of Instagram or any of these different types of applications.
It's a lot of fun. I love getting to answer people's questions. I love seeing the kind of creative spark that they're like, oh yeah, maybe I am smart enough to understand this, or to try and fool it on my own. Or know that like these systems aren't maybe as complex or smart as I give them credit for.
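Kate's "low specificity" point can be made concrete with a toy sketch. This is purely illustrative, not code from Flock or any real ALPR product: a matcher generous enough to accept anything plate-shaped will sweep up ordinary words and shirt prints right alongside real plates.

```python
import re

# Hypothetical toy matcher (not any real ALPR's logic): it accepts any
# 5-7 character alphanumeric token shaped roughly like a license plate.
PLATE_LIKE = re.compile(r"\b[A-Z0-9]{2,3}[- ]?[A-Z0-9]{3,4}\b")

def extract_plate_candidates(ocr_text: str) -> list[str]:
    """Return every plate-shaped token found in OCR output."""
    return PLATE_LIKE.findall(ocr_text.upper())

# Low specificity in action: even the ordinary word "camera" is
# ingested right alongside the real plate.
print(extract_plate_candidates("camera: 7ABC123"))
# ['CAMERA', '7ABC123']

# Text printed on a shirt is just as plate-shaped as a plate,
# and the word "printed" gets swept up too.
print(extract_plate_candidates("printed on a tee: XYZ 9876"))
# ['PRINTED', 'XYZ 9876']
```

Raising the pattern's specificity (requiring digits in fixed positions, say) would cut false matches, but a reader ingesting thousands of plates a minute at highway speed errs on the side of accepting everything, which is exactly what makes fences, billboards, and t-shirts viable inputs.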
JASON KELLEY: What I like about this especially is that you are, you know, pointing out that this stuff is actually not that complicated and we've moved into a world where often the kind of digital spaces we live in, the technology we use feels so opaque that people can't even understand how to begin to like modify it, or to understand how it works or how they would build it themselves.
It's something we've been talking about with other people, how there's sort of like a moment where you realize that you can modify the digital world, or that you can, you know, understand how it works. Was there a moment in your work or in your life, um, where you realized that you could sort of understand that technology was there FOR you, not just there, like, to be thrust upon you?
KATE BERTASH: You know, it might be a little bit late in my life, but I think when I first got this job, and I was like, oh my gosh, what am I going to do to really help kind of break through the many types of, like, privacy and safety problems that are facing this community, somebody had said, Kate, you should go to DEF CON. And I went to DEF CON, my very first one, and I was, like, blown back in my chair.
DEF CON is America's largest hacker conference. It takes place every single year in Las Vegas, and I think going there, you see not only these presentations on things that people have broken, but then there are places called villages that you walk through, and people show you how to break systems, or why, actually, it should be a lot harder to break this than it is.
Like the Voting Village: they buy old voting machines off of eBay and then, you know, teach everyone who walks in, within, you know, 20 minutes, how you can break into a voting machine. And it was just this, like, moment where I realized that you don't have to take technology as it is given to you. We all deserve technology that has our back and can't be modified or broken to hurt us.
And you can do that by yourself, sort of like actively tinkering on it. And I think that spirit of irreverence really carried through to a lot of the work that we do with Digital Defense Fund, where we get people all the time who, like, come in and are worried about absolutely everything. It's so hard to decide what bite of the elephant to take first on, you know, improving the safety and privacy for the team and how they work and the patients that they serve.
But then we get to kind of show people some great examples of how actually this isn't quite as complicated as you might think. I'm gonna walk you through sort of the difference of, like, getting to use one texting app versus another, or turning on two-factor.
We love tools like Have I Been Pwned because they kind of help shape that understanding. You know, like, you think about how a hacker gets a password, it feels so abstract or, like, technical, and then you realize, oh, actually, when somebody breaks these, they buy and sell them, and then somebody just takes old passwords and reuses them.
That seems far more intuitive. I can now understand the ecosystem and the logic that's used behind so much of security, and it builds on itself. And I think the thing that I'm most proud of is that we not only have this community of folks that we've worked with to improve their safety, who we've introduced to personal and professional development opportunities to keep growing that understanding. We also manage an amazing community of technologists who build their own systems.
There's one group called the DC Abortion Fund who built their own case management platform because they were not being served by any of these corporate or enterprise options that charge way too much. They have, like, you know, dozens of case managers, so that many seats was never gonna be affordable. And so they just sat down and they, you know, worked with Code for DC and they built it out, hand in hand.
And that is a project that I always point to as like, you know, it took somebody saying to themselves, I deserve better than this, and I can learn from everything I like about, you know, systems that you can buy and sell, but also like our community's gonna build what we need.
And to be supported to do that and have that encouragement is, is one of the reasons that I'm so proud that, um, over these years, the number of sort of self-built and community built software projects and other types of like ways that people deploy more secure technology to each other and teach each other has grown by leaps and bounds.
My job is so different now than what it was eight years ago, because people are hungry for it. They know that they are, you know, ready to become their own experts in their communities. And then we get requests for more train-the-trainer type material, or to help equip people to bring this back to their space the way, you know, I brought my ALPR presentation back to my own community. It's great to see that everyone is so much more encouraged, especially in these times when, like, systems are unstable, nonprofits spin up and down, and we all have funding problems that often have very little to do with the demand for those resources. That's not the end of the story.
So, yeah, I love it. It's been a wonderful journey, seeing how everything has changed from, like you said, that spirit of being always worried that things are getting worse, focusing on this dystopia, to seeing sort of, you know, how our own community has expanded its imagination. It's really wonderful.
CINDY COHN: What a joy it is to talk to someone like Kate. She brings this spirit of irreverence that I think is great, and she centers on DEF CON, because that's a community that definitely takes security seriously but doesn't take itself very seriously. So I really love that attitude, and how important that is, I hear, for building community, building resilience through what are pretty dark times for the community that she fundamentally, you know, works with.
JASON KELLEY: And building that understanding that you have the not just ability, but like the right to work with the technology that is presented to you and to understand it and to take it apart and to rebuild it. All of that is, I think, critical to, you know, building the better internet that we want.
And Kate really shows how just, you know, going to the DEF CON villages can change your whole mind about that sort of thing, and hopefully people who don't have technical skills will recognize that you actually don't necessarily need them to do what she's describing. That's another thing that she said that I really liked: that, you know, she could show up in a room and talk to 40 people about surveillance, and she doesn't have to talk about it at a, you know, technical level really, just saying, Hey, here's how this works. Did you know that? And anyone can do that. You know, you just have to show up.
CINDY COHN: Yeah. And how important these, like, hyperlocal conversations are to really getting a handle on combating this idea that we can surveil ourselves to safety. What I really loved about that story, about gathering her community together, including the sheriff, is that, you know, they actually had a real conversation about the impact of what the sheriff is doing with ALPRs, and really were able to be like, you know, look, I want you to be able to catch people who are stealing cars, but also there are these other ramifications. Really bringing it down to a human level is one of the ways we get people to kind of stop thinking that we can surveil ourselves to safety, and that technology can just replace the kind of individual, community-based conversations we need to have.
JASON KELLEY: Yeah. She really is maybe one of the best people I've ever spoken to at bringing it down to that human level.
CINDY COHN: I think of people like Kate as the connective tissue between the communities that really need technologies that serve them, and the people who either develop those technologies or think about them or advocacy groups like us who are kind of doing the policy level work or the national level or even international level work on this.
We need those, those bridges between the communities that need technologies and the people who really think about it in the kind of broader perspective or develop it and deploy it.
JASON KELLEY: I think the thing that I'm gonna take away from this most is again, just Kate's creativity and the fact that she's so optimistic and this is such a difficult topic and, and we're living in such, you know, easily described as dystopic times. Um, but, uh, she's sort of alive with the idea that it doesn't have to be that way, which is really the, the whole point of the podcast. So she embodied it really well.
CINDY COHN: Yep. And this season we're gonna be really featuring the technologies of freedom, the technologies we need in these particular times.
And Kate is just one example of so many people who are really bright spots here and pointing the way to, you know, how we can fix the internet and build ourselves a better future.
JASON KELLEY: Thanks for joining us for this episode – and this new season! – of How to Fix the Internet.
If you have feedback or suggestions, we'd love to hear from you. Visit EFF dot org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe even pick up some merch and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of BeatMower with Reed Mathis
How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.
We’ll see you next time.
I’m Jason Kelley.
CINDY COHN: And I’m Cindy Cohn.
The rich bear their fair share of climate costs
Nature Climate Change, Published online: 07 May 2025; doi:10.1038/s41558-025-02329-7
It has long been recognized that the highest-emitting regions should bear disproportionate responsibility for climate action. Now, a study shows how the highest-income individuals have specifically contributed to climate impacts worldwide.
High-income groups disproportionately contribute to climate extremes worldwide
Nature Climate Change, Published online: 07 May 2025; doi:10.1038/s41558-025-02325-x
While climate injustice is widely recognized, a quantification of how emissions inequality translates into unequal accountability is still lacking. Here researchers examine how affluent groups disproportionately contribute to the increase in mean temperature and the frequency of extreme events.
Using AI to explore the 3D structure of the genome
Inside every human cell, 2 meters of DNA is crammed into a nucleus that is only one-hundredth of a millimeter in diameter.
To fit inside that tiny space, the genome must fold into a complex structure known as chromatin, made up of DNA and proteins. The structure of that chromatin, in turn, helps to determine which of the genes will be expressed in a given cell. Neurons, skin cells, and immune cells each express different genes depending on which of their genes are accessible to be transcribed.
Deciphering those structures experimentally is a time-consuming process, making it difficult to compare the 3D genome structures found in different cell types. MIT Professor Bin Zhang is taking a computational approach to this challenge, using computer simulations and generative artificial intelligence to determine these structures.
“Regulation of gene expression relies on the 3D genome structure, so the hope is that if we can fully understand those structures, then we could understand where this cellular diversity comes from,” says Zhang, an associate professor of chemistry.
From the farm to the lab
Zhang first became interested in chemistry when his brother, who was four years older, bought some lab equipment and started performing experiments at home.
“He would bring test tubes and some reagents home and do the experiment there. I didn’t really know what he was doing back then, but I was really fascinated with all the bright colors and the smoke and the odors that could come from the reactions. That really captivated my attention,” Zhang says.
His brother later became the first person from Zhang’s rural village to go to college. That was the first time Zhang had an inkling that it might be possible to pursue a future other than following in the footsteps of his parents, who were farmers in China’s Anhui province.
“Growing up, I would have never imagined doing science or working as a faculty member in America,” Zhang says. “When my brother went to college, that really opened up my perspective, and I realized I didn’t have to follow my parents’ path and become a farmer. That led me to think that I could go to college and study more chemistry.”
Zhang attended the University of Science and Technology of China, in Hefei, where he majored in chemical physics. He enjoyed his studies and discovered computational chemistry and computational research, which became his new fascination.
“Computational chemistry combines chemistry with other subjects I love — math and physics — and brings a sense of rigor and reasoning to the otherwise more empirical rules,” he says. “I could use programming to solve interesting chemistry problems and test my own ideas very quickly.”
After graduating from college, he decided to continue his studies in the United States, which he recalled thinking was “the pinnacle of academics.” At Caltech, he worked with Thomas Miller, a professor of chemistry who used computational methods to understand molecular processes such as protein folding.
For Zhang’s PhD research, he studied a transmembrane protein that acts as a channel to allow other proteins to pass through the cell membrane. This protein, called translocon, can also open a side gate within the membrane, so that proteins that are meant to be embedded in the membrane can exit directly into the membrane.
“It’s really a remarkable protein, but it wasn’t clear how it worked,” Zhang says. “I built a computational model to understand the molecular mechanisms that dictate what are the molecular features that allow certain proteins to go into the membrane, while other proteins get secreted.”
Turning to the genome
After finishing grad school, Zhang’s research focus shifted from proteins to the genome. At Rice University, he did a postdoc with Peter Wolynes, a professor of chemistry who had made many key discoveries in the dynamics of protein folding. Around the time that Zhang joined the lab, Wolynes turned his attention to the structure of the genome, and Zhang decided to do the same.
Unlike proteins, which tend to have highly structured regions that can be studied using X-ray crystallography or cryo-EM, DNA is a very globular molecule that doesn’t lend itself to those types of analysis.
A few years earlier, in 2009, researchers at the Broad Institute, the University of Massachusetts Medical School, MIT, and Harvard University had developed a technique for studying the genome’s structure by cross-linking DNA in a cell’s nucleus. Researchers can then determine which segments are located near each other by shredding the DNA into many tiny pieces and sequencing it.
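What comes out of such an experiment is essentially a long list of genomic position pairs that were found cross-linked together; binning those pairs at a fixed resolution yields the contact matrix that downstream analyses work from. A minimal sketch in Python (the bin size and coordinates here are illustrative, not values from the study):

```python
from collections import Counter

def contact_matrix(pairs, bin_size, n_bins):
    """Aggregate cross-linked position pairs (in base pairs) into a
    symmetric n_bins x n_bins contact-frequency matrix."""
    counts = Counter()
    for pos_a, pos_b in pairs:
        i, j = pos_a // bin_size, pos_b // bin_size
        if i < n_bins and j < n_bins:
            counts[(min(i, j), max(i, j))] += 1
    matrix = [[0] * n_bins for _ in range(n_bins)]
    for (i, j), c in counts.items():
        matrix[i][j] = c
        matrix[j][i] = c  # contacts are symmetric
    return matrix

# Toy example: three ligation pairs binned at 1 Mb resolution.
pairs = [(500_000, 1_200_000), (600_000, 1_900_000), (2_100_000, 2_400_000)]
m = contact_matrix(pairs, bin_size=1_000_000, n_bins=3)
```

Real Hi-C matrices span the whole genome at much finer resolution, but the binning logic is the same.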
Zhang and Wolynes used data generated by this technique, known as Hi-C, to explore the question of whether DNA forms knots when it’s condensed in the nucleus, similar to how a strand of Christmas lights may become tangled when crammed into a box for storage.
“If DNA was just like a regular polymer, you would expect that it will become tangled and form knots. But that could be very detrimental for biology, because the genome is not just sitting there passively. It has to go through cell division, and also all this molecular machinery has to interact with the genome and transcribe it into RNA, and having knots will create a lot of unnecessary barriers,” Zhang says.
They found that, unlike Christmas lights, DNA does not form any knots even when packed into the cell nucleus, and they built a computational model allowing them to test hypotheses for how the genome is able to avoid those entanglements.
Since joining the MIT faculty in 2016, Zhang has continued developing models of how the genome behaves in 3D space, using molecular dynamic simulations. In one area of research, his lab is studying how differences between the genome structures of neurons and other brain cells give rise to their unique functions, and they are also exploring how misfolding of the genome may lead to diseases such as Alzheimer’s.
When it comes to connecting genome structure and function, Zhang believes that generative AI methods will also be essential. In a recent study, he and his students reported a new computational model, ChromoGen, that uses generative AI to predict the 3D structures of genomic regions, based on their DNA sequences.
“I think that in the future, we will have both components: generative AI and also theoretical chemistry-based approaches,” he says. “They nicely complement each other and allow us to both build accurate 3D structures and understand how those structures arise from the underlying physical forces.”
How can India decarbonize its coal-dependent electric power system?
As the world struggles to reduce climate-warming carbon emissions, India has pledged to do its part, and its success is critical: In 2023, India was the third-largest carbon emitter worldwide. The Indian government has committed to having net-zero carbon emissions by 2070.
To fulfill that promise, India will need to decarbonize its electric power system, and that will be a challenge: Fully 60 percent of India’s electricity comes from coal-burning power plants that are extremely inefficient. To make matters worse, the demand for electricity in India is projected to more than double in the coming decade due to population growth and increased use of air conditioning, electric cars, and so on.
Despite having set an ambitious target, the Indian government has not proposed a plan for getting there. Indeed, as in other countries, in India the government continues to permit new coal-fired power plants to be built, and aging plants to be renovated and their retirement postponed.
To help India define an effective — and realistic — plan for decarbonizing its power system, key questions must be addressed. For example, India is already rapidly developing carbon-free solar and wind power generators. What opportunities remain for further deployment of renewable generation? Are there ways to retrofit or repurpose India’s existing coal plants that can substantially and affordably reduce their greenhouse gas emissions? And do the responses to those questions differ by region?
With funding from IHI Corp. through the MIT Energy Initiative (MITEI), Yifu Ding, a postdoc at MITEI, and her colleagues set out to answer those questions by first using machine learning to determine the efficiency of each of India’s current 806 coal plants, and then investigating the impacts that different decarbonization approaches would have on the mix of power plants and the price of electricity in 2035 under increasingly stringent caps on emissions.
First step: Develop the needed dataset
An important challenge in developing a decarbonization plan for India has been the lack of a complete dataset describing the current power plants in India. While other studies have generated plans, they haven’t taken into account the wide variation in the coal-fired power plants in different regions of the country. “So, we first needed to create a dataset covering and characterizing all of the operating coal plants in India. Such a dataset was not available in the existing literature,” says Ding.
Making a cost-effective plan for expanding the capacity of a power system requires knowing the efficiencies of all the power plants operating in the system. For this study, the researchers used as their metric the “station heat rate,” a standard measurement of the overall fuel efficiency of a given power plant. The station heat rate of each plant is needed in order to calculate the fuel consumption and power output of that plant as plans for capacity expansion are being developed.
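The station heat rate makes that calculation concrete: it is the fuel energy a plant consumes per kilowatt-hour generated, so fuel use and emissions follow directly from a plant's output. A rough sketch in Python, with illustrative round numbers for coal's heating value and carbon content (not figures from the study):

```python
def fuel_use_and_emissions(generation_mwh, heat_rate_kj_per_kwh,
                           heating_value_mj_per_kg=20.0,
                           co2_kg_per_kg_coal=2.2):
    """Coal burned and CO2 emitted for a given generation level.
    The heating value and emission factor are illustrative round
    numbers, not values from the dataset."""
    # kJ/kWh * kWh gives kJ; divide by 1000 to get MJ of heat input.
    heat_input_mj = generation_mwh * 1000 * heat_rate_kj_per_kwh / 1000
    coal_kg = heat_input_mj / heating_value_mj_per_kg
    co2_kg = coal_kg * co2_kg_per_kg_coal
    return coal_kg, co2_kg

# A subcritical plant (~10,500 kJ/kWh, ~34% efficient) versus a
# supercritical one (~8,500 kJ/kWh, ~42% efficient), each producing 1 GWh:
sub = fuel_use_and_emissions(1000, 10_500)
sup = fuel_use_and_emissions(1000, 8_500)
```

The gap between the two heat rates is exactly why plant-by-plant efficiency data matters for capacity planning.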
Some of the Indian coal plants’ efficiencies were recorded before 2022, so Ding and her team used machine-learning models to predict the efficiencies of all the Indian coal plants operating now. In 2024, they created and posted online the first comprehensive, open-sourced dataset for all 806 power plants in 30 regions of India. The work won the 2024 MIT Open Data Prize. This dataset includes each plant’s power capacity, efficiency, age, load factor (a measure indicating how much of the time it operates), water stress, and more.
In addition, they categorized each plant according to its boiler design. A “supercritical” plant operates at a relatively high temperature and pressure, which makes it thermodynamically efficient, so it produces a lot of electricity for each unit of heat in the fuel. A “subcritical” plant runs at a lower temperature and pressure, so it’s less thermodynamically efficient. Most of the Indian coal plants are still subcritical plants running at low efficiency.
Next step: Investigate decarbonization options
Equipped with their detailed dataset covering all the coal power plants in India, the researchers were ready to investigate options for responding to tightening limits on carbon emissions. For that analysis, they turned to GenX, a modeling platform that was developed at MITEI to help guide decision-makers as they make investments and other plans for the future of their power systems.
Ding built a GenX model based on India’s power system in 2020, including details about each power plant and transmission network across 30 regions of the country. She also entered the coal price, potential resources for wind and solar power installations, and other attributes of each region. Based on the parameters given, the GenX model would calculate the lowest-cost combination of equipment and operating conditions that can fulfill a defined future level of demand while also meeting specified policy constraints, including limits on carbon emissions. The model and all data sources were also released as open-source tools for all viewers to use.
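The core idea behind such a model can be sketched with a deliberately tiny example: serve a fixed demand at least cost from a few stylized resources while respecting an emissions cap. All costs and factors below are invented for illustration; GenX itself solves a far larger optimization covering hundreds of plants, 30 regions, transmission, and storage:

```python
def least_cost_mix(demand_mwh, cap_t_co2, renewables_cap_mwh):
    """Serve demand at least cost from three stylized resources:
    renewables ($30/MWh, zero CO2, limited by buildable capacity),
    coal ($40/MWh, 0.95 t CO2/MWh, unlimited), and an expensive clean
    backup ($90/MWh, zero CO2, standing in for gas or batteries).
    All numbers are illustrative, not from the study."""
    renewables = min(demand_mwh, renewables_cap_mwh)       # cheapest first
    coal = min(demand_mwh - renewables, cap_t_co2 / 0.95)  # until the cap binds
    backup = demand_mwh - renewables - coal                # fills the rest
    cost = 30 * renewables + 40 * coal + 90 * backup
    return cost, {"renewables": renewables, "coal": coal, "backup": backup}

# Tightening the emissions cap swaps coal for the costly clean backup,
# raising the cost of serving the same demand:
loose_cost, _ = least_cost_mix(1000, cap_t_co2=570, renewables_cap_mwh=400)
tight_cost, _ = least_cost_mix(1000, cap_t_co2=285, renewables_cap_mwh=400)
```

Even this toy version reproduces the qualitative pattern in the study's results: cheap renewables are built to their limit, and tighter caps raise costs by forcing in more expensive clean resources.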
Ding and her colleagues — Dharik Mallapragada, a former principal research scientist at MITEI who is now an assistant professor of chemical and biomolecular engineering at NYU Tandon School of Engineering and a MITEI visiting scientist; and Robert J. Stoner, the founding director of the MIT Tata Center for Technology and Design and former deputy director of MITEI for science and technology — then used the model to explore options for meeting demands in 2035 under progressively tighter carbon emissions caps, taking into account region-to-region variations in the efficiencies of the coal plants, the price of coal, and other factors. They describe their methods and their findings in a paper published in the journal Energy for Sustainable Development.
In separate runs, they explored plans involving various combinations of current coal plants, possible new renewable plants, and more, to see their outcome in 2035. Specifically, they assumed the following four “grid-evolution scenarios”:
Baseline: The baseline scenario assumes limited onshore wind and solar photovoltaics development and excludes retrofitting options, representing a business-as-usual pathway.
High renewable capacity: This scenario calls for the development of onshore wind and solar power without any supply chain constraints.
Biomass co-firing: This scenario assumes the baseline limits on renewables, but here all coal plants — both subcritical and supercritical — can be retrofitted for “co-firing” with biomass, an approach in which clean-burning biomass replaces some of the coal fuel. Certain coal power plants in India already co-fire coal and biomass, so the technology is known.
Carbon capture and sequestration plus biomass co-firing: This scenario is based on the same assumptions as the biomass co-firing scenario with one addition: All of the high-efficiency supercritical plants are also retrofitted for carbon capture and sequestration (CCS), a technology that captures and removes carbon from a power plant’s exhaust stream and prepares it for permanent disposal. Thus far, CCS has not been used in India. This study specifies that 90 percent of all carbon in the power plant exhaust is captured.
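The arithmetic behind those two retrofits is simple to sketch: co-firing displaces a share of the coal (20 percent in the study's co-firing scenario), and CCS removes a fraction (90 percent here) of the CO2 that remains in the exhaust. The base emission factor below is an illustrative number, not one from the paper:

```python
def retrofit_emission_factor(base_ef_t_per_mwh,
                             biomass_share=0.0, capture_rate=0.0):
    """Net fossil-CO2 emission factor after retrofits. Biomass
    co-firing displaces that share of coal (biomass treated as
    carbon-neutral here); CCS captures capture_rate of the rest."""
    return base_ef_t_per_mwh * (1 - biomass_share) * (1 - capture_rate)

base = 0.95  # t CO2/MWh, an illustrative subcritical-coal figure
cofire_only = retrofit_emission_factor(base, biomass_share=0.20)
cofire_ccs = retrofit_emission_factor(base, biomass_share=0.20,
                                      capture_rate=0.90)
```

This is why co-firing alone moves the needle only modestly while adding CCS cuts the factor by roughly an order of magnitude.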
Ding and her team investigated power system planning under each of those grid-evolution scenarios and four assumptions about carbon caps: no cap, which is the current situation; 1,000 million tons (Mt) of carbon dioxide (CO2) emissions, which reflects India’s announced targets for 2035; and two more-ambitious targets, namely 800 Mt and 500 Mt. For context, CO2 emissions from India’s power sector totaled about 1,100 Mt in 2021. (Note that transmission network expansion is allowed in all scenarios.)
Key findings
Modeling the adoption of carbon caps under the four scenarios generated a vast array of detailed numerical results. But taken together, the results show interesting trends in the cost-optimal mix of generating capacity and the cost of electricity under the different scenarios.
Even without any limits on carbon emissions, most new capacity additions will be wind and solar generators — the lowest-cost option for expanding India’s electricity-generation capacity. Indeed, this is observed to be the case now in India. However, the increasing demand for electricity will still require some new coal plants to be built. Model results show a 10 to 20 percent increase in coal plant capacity by 2035 relative to 2020.
Under the baseline scenario, renewables are expanded up to the maximum allowed under the assumptions, implying that more deployment would be economical. More coal capacity is built, and as the cap on emissions tightens, there is also investment in natural gas power plants, as well as batteries to help compensate for the now-large amount of intermittent solar and wind generation. When a 500 Mt cap on carbon is imposed, the cost of electricity generation is twice as high as it was with no cap.
The high renewable capacity scenario reduces the development of new coal capacity and produces the lowest electricity cost of the four scenarios. Under the most stringent cap — 500 Mt — onshore wind farms play an important role in bringing the cost down. “Otherwise, it’ll be very expensive to reach such stringent carbon constraints,” notes Ding. “Certain coal plants that remain run only a few hours per year, so are inefficient as well as financially unviable. But they still need to be there to support wind and solar.” She explains that other backup sources of electricity, such as batteries, are even more costly.
The biomass co-firing scenario assumes the same capacity limit on renewables as in the baseline scenario, and the results are much the same, in part because the biomass replaces such a low fraction — just 20 percent — of the coal in the fuel feedstock. “This scenario would be most similar to the current situation in India,” says Ding. “It won’t bring down the cost of electricity, so we’re basically saying that adding this technology doesn’t contribute effectively to decarbonization.”
But CCS plus biomass co-firing is a different story. It also assumes the limits on renewables development, yet it is the second-best option in terms of reducing costs. Under the 500 Mt cap on CO2 emissions, retrofitting for both CCS and biomass co-firing produces a 22 percent reduction in the cost of electricity compared to the baseline scenario. In addition, as the carbon cap tightens, this option reduces the extent of deployment of natural gas plants and significantly improves overall coal plant utilization. That increased utilization “means that coal plants have switched from just meeting the peak demand to supplying part of the baseline load, which will lower the cost of coal generation,” explains Ding.
Some concerns
While those trends are enlightening, the analyses also uncovered some concerns for India to consider, in particular, with the two approaches that yielded the lowest electricity costs.
The high renewables scenario is, Ding notes, “very ideal.” It assumes that there will be little limiting the development of wind and solar capacity, so there won’t be any issues with supply chains, which is unrealistic. More importantly, the analyses showed that implementing the high renewables approach would create uneven investment in renewables across the 30 regions. Resources for onshore and offshore wind farms are mainly concentrated in a few regions in western and southern India. “So all the wind farms would be put in those regions, near where the rich cities are,” says Ding. “The poorer cities on the eastern side, where the coal power plants are, will have little renewable investment.”
So the approach that’s best in terms of cost is not best in terms of social welfare, because it tends to benefit the rich regions more than the poor ones. “It’s like [the government will] need to consider the trade-off between energy justice and cost,” says Ding. Enacting state-level renewable generation targets could encourage a more even distribution of renewable capacity installation. Also, as transmission expansion is planned, coordination among power system operators and renewable energy investors in different regions could help in achieving the best outcome.
CCS plus biomass co-firing — the second-best option for reducing prices — solves the equity problem posed by high renewables, and it assumes a more realistic level of renewable power adoption. However, CCS hasn’t been used in India, so there is no precedent in terms of costs. The researchers therefore based their cost estimates on the cost of CCS in China and then increased the required investment by 10 percent, the “first-of-a-kind” index developed by the U.S. Energy Information Administration. Based on those costs and other assumptions, the researchers conclude that coal plants with CCS could come into use by 2035 when the carbon cap for power generation is less than 1,000 Mt.
But will CCS actually be implemented in India? While there’s been discussion about using CCS in heavy industry, the Indian government has not announced any plans for implementing the technology in coal-fired power plants. Indeed, India is currently “very conservative about CCS,” says Ding. “Some researchers say CCS won’t happen because it’s so expensive, and as long as there’s no direct use for the captured carbon, the only thing you can do is put it in the ground.” She adds, “It’s really controversial to talk about whether CCS will be implemented in India in the next 10 years.”
Ding and her colleagues hope that other researchers and policymakers — especially those working in developing countries — may benefit from gaining access to their datasets and learning about their methods. Based on their findings for India, she stresses the importance of understanding the detailed geographical situation in a country in order to design plans and policies that are both realistic and equitable.