Feed aggregator
Fired FEMA chief threatened to quit weeks ago
Congress counters Trump with massive FEMA restructuring plan
Trump targets carbon removal project in speaker’s district
2 offshore wind projects move forward despite Trump’s threats
Colorado legislators nix plan to tame property insurance costs
Amtrak to ax 450 jobs as part of $100M spending cut
California drivers could pay an extra $700 a year for gasoline
New York governor’s budget boosts environmental, climate spending
Bank watchdogs flag near-term risks of delaying climate efforts
Zimbabwe sets up carbon markets watchdog to govern trading activity
UK water firms warned to protect supply amid drought worries
This April was world’s second-hottest on record, EU scientists say
Ping pong bot returns shots with high-speed precision
MIT engineers are getting in on the robotic ping pong game with a powerful, lightweight design that returns shots with high-speed precision.
The new table tennis bot comprises a multijointed robotic arm that is fixed to one end of a ping pong table and wields a standard ping pong paddle. Aided by several high-speed cameras and a high-bandwidth predictive control system, the robot quickly estimates the speed and trajectory of an incoming ball and executes one of several swing types — loop, drive, or chop — to precisely hit the ball to a desired location on the table with various types of spin.
In tests, the engineers threw 150 balls at the robot, one after the other, from across the ping pong table. The bot successfully returned the balls with a hit rate of about 88 percent across all three swing types. The robot’s strike speed approaches the top return speeds of human players and is faster than that of other robotic table tennis designs.
Now, the team is looking to increase the robot’s playing radius so that it can return a wider variety of shots. Then, they envision the setup could be a viable competitor in the growing field of smart robotic training systems.
Beyond the game, the team says the table tennis tech could be adapted to improve the speed and responsiveness of humanoid robots, particularly for search-and-rescue scenarios and other situations in which a robot would need to react or anticipate quickly.
“The problems that we’re solving, specifically related to intercepting objects really quickly and precisely, could potentially be useful in scenarios where a robot has to carry out dynamic maneuvers and plan where its end effector will meet an object, in real-time,” says MIT graduate student David Nguyen.
Nguyen is a co-author of the new study, along with MIT graduate student Kendrick Cancio and Sangbae Kim, associate professor of mechanical engineering and head of the MIT Biomimetics Robotics Lab. The researchers will present the results of those experiments in a paper at the IEEE International Conference on Robotics and Automation (ICRA) this month.
Precise play
Building robots to play ping pong is a challenge that researchers have taken up since the 1980s. The problem requires a unique combination of technologies, including high-speed machine vision, fast and nimble motors and actuators, precise manipulator control, and accurate, real-time prediction, as well as higher-level planning of game strategy.
“If you think of the spectrum of control problems in robotics, we have on one end manipulation, which is usually slow and very precise, such as picking up an object and making sure you’re grasping it well. On the other end, you have locomotion, which is about being dynamic and adapting to perturbations in your system,” Nguyen explains. “Ping pong sits in between those. You’re still doing manipulation, in that you have to be precise in hitting the ball, but you have to hit it within 300 milliseconds. So, it balances similar problems of dynamic locomotion and precise manipulation.”
Ping pong robots have come a long way since the 1980s, most recently with designs by Omron and Google DeepMind that employ artificial intelligence techniques to “learn” from previous ping pong data, to improve a robot’s performance against an increasing variety of strokes and shots. These designs have been shown to be fast and precise enough to rally with intermediate human players.
“These are really specialized robots designed to play ping pong,” Cancio says. “With our robot, we are exploring how the techniques used in playing ping pong could translate to a more generalized system, like a humanoid or anthropomorphic robot that can do many different, useful things.”
Game control
For their new design, the researchers modified a lightweight, high-power robotic arm that Kim’s lab developed as part of the MIT Humanoid — a bipedal, two-armed robot that is about the size of a small child. The group is using the robot to test various dynamic maneuvers, including navigating uneven and varying terrain as well as jumping, running, and doing backflips, with the aim of one day deploying such robots for search-and-rescue operations.
Each of the humanoid’s arms has four joints, or degrees of freedom, which are each controlled by an electrical motor. Cancio, Nguyen, and Kim built a similar robotic arm, which they adapted for ping pong by adding an additional degree of freedom in the wrist to allow for control of a paddle.
The team fixed the robotic arm to a table at one end of a standard ping pong table and set up high-speed motion capture cameras around the table to track balls that are bounced at the robot. They also developed optimal control algorithms that predict, based on the principles of math and physics, what speed and paddle orientation the arm should execute to hit an incoming ball with a particular type of swing: loop (or topspin), drive (straight-on), or chop (backspin).
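To make the physics-based prediction concrete, here is a minimal sketch of the kind of forecast such a controller depends on: a gravity-only ballistic model stepped forward until the ball reaches the paddle plane. The function and its parameters are illustrative assumptions, not the team's published algorithms, which also account for spin, bounce, and paddle orientation.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def predict_interception(p0, v0, x_paddle, dt=0.001, t_max=2.0):
    """Integrate a gravity-only ballistic model until the ball crosses the
    paddle plane at x = x_paddle; return the arrival time and position.
    A real ping pong controller would also model air drag, Magnus lift
    from spin, and the bounce off the table."""
    p = np.asarray(p0, dtype=float)  # position [x, y, z], meters
    v = np.asarray(v0, dtype=float)  # velocity, meters/second
    t = 0.0
    while p[0] < x_paddle:
        v[2] -= G * dt               # gravity acts on the vertical (z) axis
        p = p + v * dt
        t += dt
        if t > t_max:
            return None              # ball never reaches the paddle plane
    return t, p

# Example: a ball leaving the far end of the table at roughly 5 m/s
print(predict_interception(p0=[0.0, 0.0, 0.3], v0=[5.0, 0.1, 0.5], x_paddle=2.5))
```

Given the predicted arrival time and point, the optimal control layer can then solve for the joint velocities and paddle orientation that realize the chosen swing.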
They implemented the algorithms using three computers that simultaneously processed camera images, estimated a ball’s real-time state, and translated these estimations to commands for the robot’s motors to quickly react and take a swing.
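In outline, that three-stage division of labor resembles the loop sketched below. Every name here is a hypothetical placeholder rather than the authors' software; the predictor could be the ballistic sketch above, and a production system would replace the toy estimator with a Kalman filter.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class BallState:
    p: np.ndarray  # estimated position, meters
    v: np.ndarray  # estimated velocity, meters/second

def update_state(prev: BallState, meas_p: np.ndarray, dt: float,
                 alpha: float = 0.5) -> BallState:
    """Toy estimator: blend the camera measurement with the prediction and
    differentiate positions for velocity (a stand-in for a Kalman filter)."""
    pred_p = prev.p + prev.v * dt
    v = alpha * (meas_p - prev.p) / dt + (1.0 - alpha) * prev.v
    return BallState(p=alpha * meas_p + (1.0 - alpha) * pred_p, v=v)

def choose_swing(state: BallState) -> str:
    """Toy swing selector keyed to the ball's vertical speed: chop
    fast-dropping balls, loop rising ones, drive the rest."""
    if state.v[2] < -1.0:
        return "chop"
    if state.v[2] > 1.0:
        return "loop"
    return "drive"

# One tick of the loop: camera measurement in, swing decision out.
state = BallState(p=np.array([0.0, 0.0, 0.3]), v=np.array([5.0, 0.0, 0.5]))
state = update_state(state, meas_p=np.array([0.025, 0.0, 0.302]), dt=0.005)
print(choose_swing(state))
```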
After consecutively bouncing 150 balls at the arm, they found the robot’s hit rate, or accuracy of returning the ball, was about the same for all three types of swings: 88.4 percent for loop strikes, 89.2 percent for chops, and 87.5 percent for drives. They have since tuned the robot’s reaction time and found the arm hits balls faster than existing systems, at velocities of 20 meters per second.
In their paper, the team reports that the robot’s strike speed, or the speed at which the paddle hits the ball, is on average 11 meters per second. Advanced human players have been known to return balls at speeds of 21 to 25 meters per second. Since writing up the results of their initial experiments, the researchers have further tweaked the system, and have recorded strike speeds of up to 19 meters per second (about 42 miles per hour).
“Some of the goal of this project is to say we can reach the same level of athleticism that people have,” Nguyen says. “And in terms of strike speed, we’re getting really, really close.”
Their follow-up work has also enabled the robot to aim. The team incorporated control algorithms into the system that predict not only how but where to hit an incoming ball. With its latest iteration, the researchers can set a target location on the table, and the robot will hit the ball to that location.
Because it is fixed to the table, the robot has limited mobility and reach, and can mostly return balls that arrive within a crescent-shaped area around the midline of the table. In the future, the engineers plan to rig the bot on a gantry or wheeled platform, enabling it to cover more of the table and return a wider variety of shots.
“A big thing about table tennis is predicting the spin and trajectory of the ball, given how your opponent hit it, which is information that an automatic ball launcher won’t give you,” Cancio says. “A robot like this could mimic the maneuvers that an opponent would do in a game environment, in a way that helps humans play and improve.”
This research is supported, in part, by the Robotics and AI Institute.
System lets robots identify an object’s properties through handling
A human clearing junk out of an attic can often guess the contents of a box simply by picking it up and giving it a shake, without the need to see what’s inside. Researchers from MIT, Amazon Robotics, and the University of British Columbia have taught robots to do something similar.
They developed a technique that enables robots to use only internal sensors to learn about an object’s weight, softness, or contents by picking it up and gently shaking it. With their method, which does not require external measurement tools or cameras, the robot can accurately guess parameters like an object’s mass in a matter of seconds.
This low-cost technique could be especially useful in applications where cameras might be less effective, such as sorting objects in a dark basement or clearing rubble inside a building that partially collapsed after an earthquake.
Key to their approach is a simulation process that incorporates models of the robot and the object to rapidly identify characteristics of that object as the robot interacts with it.
The researchers’ technique is as good at guessing an object’s mass as some more complex and expensive methods that incorporate computer vision. In addition, their data-efficient approach is robust enough to handle many types of unseen scenarios.
“This idea is general, and I believe we are just scratching the surface of what a robot can learn in this way. My dream would be to have robots go out into the world, touch things and move things in their environments, and figure out the properties of everything they interact with on their own,” says Peter Yichen Chen, an MIT postdoc and lead author of a paper on this technique.
His coauthors include fellow MIT postdoc Chao Liu; Pingchuan Ma PhD ’25; Jack Eastman MEng ’24; Dylan Randle and Yuri Ivanov of Amazon Robotics; MIT professors of electrical engineering and computer science Daniela Rus, who leads MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Wojciech Matusik, who leads the Computational Design and Fabrication Group within CSAIL. The research will be presented at the International Conference on Robotics and Automation.
Sensing signals
The researchers’ method leverages proprioception, which is a human or robot’s ability to sense its movement or position in space.
For instance, a human who lifts a dumbbell at the gym can sense the weight of that dumbbell in their wrist and bicep, even though they are holding the dumbbell in their hand. In the same way, a robot can “feel” the heaviness of an object through the multiple joints in its arm.
“A human doesn’t have super-accurate measurements of the joint angles in our fingers or the precise amount of torque we are applying to an object, but a robot does. We take advantage of these abilities,” Liu says.
As the robot lifts an object, the researchers’ system gathers signals from the robot’s joint encoders, which are sensors that detect the rotational position and speed of its joints during movement.
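A sketch of what that data collection might look like appears below. The `read_joint_encoders` callback is a hypothetical stand-in for a real robot driver's query function, not an API from the paper.

```python
import time

def record_trajectory(read_joint_encoders, duration_s=2.0, rate_hz=500):
    """Sample joint angles (radians) and angular velocities (rad/s) while
    the robot lifts and shakes an object, returning the timestamped
    trajectory that a downstream identification step can consume."""
    samples = []
    period = 1.0 / rate_hz
    t0 = time.monotonic()
    while (t := time.monotonic() - t0) < duration_s:
        q, qdot = read_joint_encoders()  # positions and velocities per joint
        samples.append((t, q, qdot))
        time.sleep(period)
    return samples
```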
Most robots have joint encoders within the motors that drive their moveable parts, Liu adds. This makes their technique more cost-effective than some approaches because it doesn’t need extra components like tactile sensors or vision-tracking systems.
To estimate an object’s properties during robot-object interactions, their system relies on two models: one that simulates the robot and its motion and one that simulates the dynamics of the object.
“Having an accurate digital twin of the real world is really important for the success of our method,” Chen adds.
Their algorithm “watches” the robot and object move during a physical interaction and uses joint encoder data to work backward and identify the properties of the object.
For instance, a heavier object will move slower than a light one if the robot applies the same amount of force.
Differentiable simulations
They utilize a technique called differentiable simulation, which allows the algorithm to predict how small changes in an object’s properties, like mass or softness, impact the robot’s ending joint position. The researchers built their simulations using NVIDIA’s Warp library, an open-source developer tool that supports differentiable simulations.
Once the differentiable simulation matches up with the robot’s real movements, the system has identified the correct property. The algorithm can do this in a matter of seconds and only needs to see one real-world trajectory of the robot in motion to perform the calculations.
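As a stripped-down illustration of that fit-the-simulation-to-reality loop, the sketch below uses Warp's gradient tape to recover the mass of a point mass from one observed trajectory by gradient descent. It is a toy under strong assumptions (1D motion, known constant force, no friction), not the paper's robot and object models.

```python
import warp as wp

wp.init()

@wp.kernel
def rollout(mass: wp.array(dtype=float), force: float, dt: float,
            steps: int, x_obs: float, loss: wp.array(dtype=float)):
    # Forward-simulate a 1D point mass pushed by a constant force, then
    # score the squared error against the observed final position.
    m = mass[0]
    x = 0.0
    v = 0.0
    for i in range(steps):
        v = v + (force / m) * dt
        x = x + v * dt
    loss[0] = (x - x_obs) * (x - x_obs)

force, dt, steps = 2.0, 0.01, 100
x_obs = 0.505         # final position produced by a hidden 2.0 kg object
m_guess = 1.0
for _ in range(200):  # gradient descent on the mass estimate
    mass = wp.array([m_guess], dtype=float, requires_grad=True)
    loss = wp.zeros(1, dtype=float, requires_grad=True)
    tape = wp.Tape()
    with tape:  # record the launch so it can be differentiated
        wp.launch(rollout, dim=1, inputs=[mass, force, dt, steps, x_obs, loss])
    tape.backward(loss=loss)  # reverse-mode pass: d(loss)/d(mass)
    m_guess -= 2.0 * float(mass.grad.numpy()[0])
print(m_guess)  # converges toward the true mass of about 2.0 kg
```

Once the optimized parameter reproduces the observed trajectory, the same machinery extends to properties like softness or inertia by declaring them as additional differentiable parameters.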
“Technically, as long as you know the model of the object and how the robot can apply force to that object, you should be able to figure out the parameter you want to identify,” Liu says.
The researchers used their method to learn the mass and softness of an object, but their technique could also determine properties like moment of inertia or the viscosity of a fluid inside a container.
Plus, because their algorithm does not need an extensive dataset for training like some methods that rely on computer vision or external sensors, it would not be as susceptible to failure when faced with unseen environments or new objects.
In the future, the researchers want to try combining their method with computer vision to create a multimodal sensing technique that is even more powerful.
“This work is not trying to replace computer vision. Both methods have their pros and cons. But here we have shown that without a camera we can already figure out some of these properties,” Chen says.
They also want to explore applications with more complicated robotic systems, like soft robots, and more complex objects, including sloshing liquids or granular media like sand.
In the long run, they hope to apply this technique to improve robot learning, enabling future robots to quickly develop new manipulation skills and adapt to changes in their environments.
“Determining the physical properties of objects from data has long been a challenge in robotics, particularly when only limited or noisy measurements are available. This work is significant because it shows that robots can accurately infer properties like mass and softness using only their internal joint sensors, without relying on external cameras or specialized measurement tools,” says Miles Macklin, senior director of simulation technology at NVIDIA, who was not involved with this research.
This work is funded, in part, by Amazon and the GIST-CSAIL Research Program.
The FCC Must Reject Efforts to Lock Up Public Airwaves
President Trump’s attack on public broadcasting has attracted plenty of deserved attention, but there’s a far more technical, far more insidious policy change in the offing—one that will take away Americans’ right to unencumbered access to our publicly owned airwaves.
The FCC is quietly contemplating a fundamental restructuring of all broadcasting in the United States, via a new DRM-based standard for digital television equipment, enforced by a private “security authority” with control over licensing, encryption, and compliance. This move is confusingly called the “ATSC Transition” (ATSC is the digital TV standard the US switched to in 2009 – the “transition” here is to ATSC 3.0, a new version with built-in DRM).
The “ATSC Transition” is championed by the National Association of Broadcasters, who want to effectively privatize the public airwaves, allowing broadcasters to encrypt over-the-air programming, meaning that you will only be able to receive those encrypted shows if you buy a new TV with built-in DRM keys. It’s a tax on American TV viewers, forcing you to buy a new TV so you can continue to access a public resource you already own.
This may not strike you as a big deal. Lots of us have given up on broadcast and get all our TV over the internet. But millions of Americans still rely heavily or exclusively on broadcast television for everything from news to education to simple entertainment. Many of these viewers live in rural or tribal areas, and/or are low-income households who can least afford to “upgrade.” Historically, these viewers have been able to rely on access to broadcast because, by law, broadcasters get extremely valuable spectrum licenses in exchange for making their programming available for free to anyone within range of their broadcast antennas.
Adding DRM to over-the-air broadcasts upends this system. The “ATSC Transition” is really a transition from the century-old system of universally accessible programming to a privately controlled web of proprietary technological restrictions. It’s a transition from a system where anyone can come up with innovative new TV hardware to one where a centralized, unaccountable private authority gets a veto right over new devices.
DRM licensing schemes like this are innovation killers. Prime example: DVDs and DVD players, which have been subject to a similar central authority, and haven’t gotten a single new feature since the DVD player was introduced in 1995.
DRM is also incompatible with fundamental limits on copyright, like fair use. Those limits let you do things like record a daytime baseball game and then watch it after dinner, skipping the ads. Broadcasters would like to prevent that and DRM helps them do it. Keep in mind that bypassing or breaking a DRM system’s digital keys—even for lawful purposes like time-shifting, ad-skipping, security research, and so on—risks penalties under Section 1201 of the Digital Millennium Copyright Act. That is, unless you have the time and resources to beg the Copyright Office for an exemption (and, if the exemption is granted, to renew your plea every three years).
Broadcasters say they need this change to offer viewers new interactive features that will serve the public interest. But if broadcasters have cool new features the public will enjoy, they don’t need to force us to adopt them. The most reliable indicator that a new feature is cool and desirable is that people voluntarily install it. If the only way to get someone to use a new feature is to lock up the keys so they can’t turn it off, that’s a clear sign that the feature is not in the public interest.
That's why EFF joined Public Knowledge, Consumer Reports and others in urging the FCC to reject this terrible, horrible, no good, very bad idea and keep our airwaves free for all of us. We hope the agency listens, and puts the interests of millions of Americans above the private interests of a few powerful media cartels.
Appeals Court Sidesteps The Big Questions on Geofence Warrants
Another federal appeals court has ruled on controversial geofence warrants—sort of. Last week, the US Court of Appeals for the Fourth Circuit sitting en banc issued a single-sentence opinion affirming the lower court opinion in United States v. Chatrie. The practical outcome of this sentence is clear: the evidence collected from a geofence warrant issued to Google can be used against the defendant in this case. But that is largely where the clarity ends, because the fifteen judges of the Fourth Circuit who heard the en banc appeal agreed on little else. The judges wrote a total of nine separate opinions, no single one of which received a majority of votes. Amid this fracture, the judges essentially deadlocked on important constitutional questions about whether geofence warrants are a Fourth Amendment search. As a result, the new opinion in Chatrie is a missed opportunity for the Fourth Circuit to join the two other appellate courts that have considered the issue in finding geofence warrants unconstitutional.
Geofence warrants require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located within a geographic area and time period both specified by law enforcement. This creates a high risk of suspicion falling on innocent people and can reveal sensitive and private information about where individuals have traveled in the past. Following intense scrutiny from the press and the public, Google announced changes to how it stores location data in late 2023, apparently with the effect of eventually making it impossible for the company to respond to geofence warrants.
Regardless, numerous criminal cases involving geofence evidence continue to make their way through the courts. The district court decision in Chatrie was one of the first, and it set an important precedent in finding the warrant overbroad and unconstitutional. However, the court allowed the government to use the evidence it obtained because it relied on the warrant in “good faith.” On appeal, a three-judge panel of the Fourth Circuit voted 2-1 that the geofence warrant did not constitute a search at all. Later, the appeals court agreed to rehear the case en banc, in front of all active judges in the circuit. (EFF filed amicus briefs at both the panel and en banc stages of the appeal.)
The only agreement among the fifteen judges who reheard the case was that the evidence should be allowed in, with at least eight relying on the good faith analysis. Meanwhile, seven judges argued that geofence warrants constitute a Fourth Amendment search in at least some fashion, while exactly seven disagreed. Although that means the appellate court did not rule on the Fourth Amendment implications of geofence warrants, neither did it vacate the lower court’s solid constitutional analysis.
Above all, it remains the case that every appellate court to rule on geofence warrants to date has found serious constitutional defects. As we explain in every brief we file in these cases, reverse warrants like these are the very sort of “general searches” that the authors of the Fourth Amendment sought to prohibit.
Dopamine signals when a fear can be forgotten
Dangers come but dangers also go, and when they do, the brain has an “all-clear” signal that teaches it to extinguish its fear. A new study in mice by MIT neuroscientists shows that the signal is the release of dopamine along a specific interregional brain circuit. The research therefore pinpoints a potentially critical mechanism of mental health, restoring calm when it works, but prolonging anxiety or even post-traumatic stress disorder when it doesn’t.
“Dopamine is essential to initiate fear extinction,” says Michele Pignatelli di Spinazzola, co-author of the new study from the lab of senior author Susumu Tonegawa, Picower Professor of biology and neuroscience at the RIKEN-MIT Laboratory for Neural Circuit Genetics within The Picower Institute for Learning and Memory at MIT, and a Howard Hughes Medical Institute (HHMI) investigator.
In 2020, Tonegawa’s lab showed that learning to be afraid, and then learning when that’s no longer necessary, result from a competition between populations of cells in the brain’s amygdala region. When a mouse learns that a place is “dangerous” (because it gets a little foot shock there), the fear memory is encoded by neurons in the anterior of the basolateral amygdala (aBLA) that express the gene Rspo2. When the mouse then learns that a place is no longer associated with danger (because they wait there and the zap doesn’t recur), neurons in the posterior basolateral amygdala (pBLA) that express the gene Ppp1r1b encode a new fear extinction memory that overcomes the original dread. Notably, those same neurons encode feelings of reward, helping to explain why it feels so good when we realize that an expected danger has dwindled.
In the new study, the lab, led by former members Xiangyu Zhang and Katelyn Flick, sought to determine what prompts these amygdala neurons to encode these memories. The rigorous set of experiments the team reports in the Proceedings of the National Academy of Sciences shows that it’s dopamine, sent to the different amygdala populations from distinct groups of neurons in the ventral tegmental area (VTA).
“Our study uncovers a precise mechanism by which dopamine helps the brain unlearn fear,” says Zhang, who also led the 2020 study and is now a senior associate at Orbimed, a health care investment firm. “We found that dopamine activates specific amygdala neurons tied to reward, which in turn drive fear extinction. We now see that unlearning fear isn’t just about suppressing it — it’s a positive learning process powered by the brain’s reward machinery. This opens up new avenues for understanding and potentially treating fear-related disorders, like PTSD.”
Forgetting fear
The VTA was the lab’s prime suspect to be the source of the signal because the region is well known for encoding surprising experiences and instructing the brain, with dopamine, to learn from them. The first set of experiments in the paper used multiple methods for tracing neural circuits to see whether and how cells in the VTA and the amygdala connect. They found a clear pattern: Rspo2 neurons were targeted by dopaminergic neurons in the anterior portion and the left and right sides of the VTA, while Ppp1r1b neurons received dopaminergic input from neurons in the center and posterior sections of the VTA. The density of connections was greater on the Ppp1r1b neurons than on the Rspo2 ones.
The circuit tracing showed that dopamine is available to amygdala neurons that encode fear and its extinction, but do those neurons care about dopamine? The team showed that indeed they express “D1” receptors for the neuromodulator. Commensurate with the degree of dopamine connectivity, Ppp1r1b cells had more receptors than Rspo2 neurons.
Dopamine does a lot of things, so the next question was whether its activity in the amygdala actually correlated with fear encoding and extinction. Using a method to track and visualize it in the brain, the team watched dopamine in the amygdala as mice underwent a three-day experiment. On Day One, they went to an enclosure where they experienced three mild shocks on the feet. On Day Two, they went back to the enclosure for 45 minutes, where they didn’t experience any new shocks — at first, the mice froze in anticipation of a shock, but then relaxed after about 15 minutes. On Day Three they returned again to test whether they had indeed extinguished the fear they showed at the beginning of Day Two.
The dopamine activity tracking revealed that during the shocks on Day One, Rspo2 neurons had the larger response to dopamine, but in the early moments of Day Two, when the anticipated shocks didn’t come and the mice eased up on freezing, the Ppp1r1b neurons showed the stronger dopamine activity. More strikingly, the mice that learned to extinguish their fear most strongly also showed the greatest dopamine signal at those neurons.
Causal connections
The final sets of experiments sought to show that dopamine is not just available and associated with fear encoding and extinction, but also actually causes them. In one set, they turned to optogenetics, a technology that enables scientists to activate or quiet neurons with different colors of light. Sure enough, when they quieted VTA dopaminergic inputs in the pBLA, doing so impaired fear extinction. When they activated those inputs, it accelerated fear extinction. The researchers were surprised that when they activated VTA dopaminergic inputs into the aBLA they could reinstate fear even without any new foot shocks, impairing fear extinction.
The other way they confirmed a causal role for dopamine in fear encoding and extinction was to manipulate the amygdala neurons’ dopamine receptors. In Ppp1r1b neurons, over-expressing dopamine receptors impaired fear recall and promoted extinction, whereas knocking the receptors down impaired fear extinction. Meanwhile in the Rspo2 cells, knocking down receptors reduced the freezing behavior.
“We showed that fear extinction requires VTA dopaminergic activity in the pBLA Ppp1r1b neurons by using optogenetic inhibition of VTA terminals and cell-type-specific knockdown of D1 receptors in these neurons,” the authors wrote.
The scientists are careful in the study to note that while they’ve identified the “teaching signal” for fear extinction learning, the broader phenomenon of fear extinction occurs brainwide, rather than in just this single circuit.
But the circuit seems to be a key node to consider as drug developers and psychiatrists work to combat anxiety and PTSD, Pignatelli di Spinazzola says.
“Fear learning and fear extinction provide a strong framework to study generalized anxiety and PTSD,” he says. “Our study investigates the underlying mechanisms, suggesting multiple targets for a translational approach, such as the pBLA and the use of dopaminergic modulation.”
Marianna Rizzo is also a co-author of the study. Support for the research came from the RIKEN Center for Brain Science, the HHMI, the Freedom Together Foundation, and The Picower Institute.
Chinese AI Submersible
A Chinese company has developed an AI-piloted submersible that can reach speeds “similar to a destroyer or a US Navy torpedo,” dive “up to 60 metres underwater,” and “remain static for more than a month, like the stealth capabilities of a nuclear submarine.” In case you’re worried about the military applications of this, you can relax because the company says that the submersible is “designated for civilian use” and can “launch research rockets.”
“Research rockets.” Sure.