Feed aggregator
ExxonMobil Lobbyist Caught Hacking Climate Activists
The Department of Justice is investigating a lobbying firm representing ExxonMobil for hacking the phones of climate activists:
The hacking was allegedly commissioned by a Washington, D.C., lobbying firm, according to a lawyer representing the U.S. government. The firm, in turn, was allegedly working on behalf of one of the world’s largest oil and gas companies, based in Texas, that wanted to discredit groups and individuals involved in climate litigation, according to the lawyer for the U.S. government. In court documents, the Justice Department does not name either company...
Disrupted disaster aid prompts fears of delayed recovery ‘for years’
Trump’s assault on climate programs begins
They hoped new carbon markets would offset Trump. Advocates are glum.
Trump's Transportation chief calls for lowering CAFE standards
Climate change worsened LA wildfires, researchers say
Costs plus Trump deliver double whammy to blue states on climate
California bill would let insurers sue oil companies to avoid raising rates
Poland urges Tesla boycott after Musk’s call to ‘move past’ Nazi guilt
Let’s kill the Green Deal together, far-right leader urges EU’s conservatives
New training approach could help AI agents perform better in uncertain conditions
A home robot trained to perform household tasks in a factory may fail to effectively scrub the sink or take out the trash when deployed in a user’s kitchen, since this new environment differs from its training space.
To avoid this, engineers often try to match the simulated training environment as closely as possible with the real world where the agent will be deployed.
However, researchers from MIT and elsewhere have now found that, despite this conventional wisdom, sometimes training in a completely different environment yields a better-performing artificial intelligence agent.
Their results indicate that, in some situations, training a simulated AI agent in a world with less uncertainty, or “noise,” enabled it to perform better than a competing AI agent trained in the same, noisy world they used to test both agents.
The researchers call this unexpected phenomenon the indoor training effect.
“If we learn to play tennis in an indoor environment where there is no noise, we might be able to more easily master different shots. Then, if we move to a noisier environment, like a windy tennis court, we could have a higher probability of playing tennis well than if we started learning in the windy environment,” explains Serena Bono, a research assistant in the MIT Media Lab and lead author of a paper on the indoor training effect.
The researchers studied this phenomenon by training AI agents to play Atari games, which they modified by adding some unpredictability. They were surprised to find that the indoor training effect consistently occurred across Atari games and game variations.
They hope these results fuel additional research toward developing better training methods for AI agents.
“This is an entirely new axis to think about. Rather than trying to match the training and testing environments, we may be able to construct simulated environments where an AI agent learns even better,” adds co-author Spandan Madan, a graduate student at Harvard University.
Bono and Madan are joined on the paper by Ishaan Grover, an MIT graduate student; Mao Yasueda, a graduate student at Yale University; Cynthia Breazeal, professor of media arts and sciences and leader of the Personal Robotics Group in the MIT Media Lab; Hanspeter Pfister, the An Wang Professor of Computer Science at Harvard; and Gabriel Kreiman, a professor at Harvard Medical School. The research will be presented at the Association for the Advancement of Artificial Intelligence Conference.
Training troubles
The researchers set out to explore why reinforcement learning agents tend to have such dismal performance when tested on environments that differ from their training space.
Reinforcement learning is a trial-and-error method in which the agent explores a training space and learns to take actions that maximize its reward.
The team developed a technique to explicitly add a certain amount of noise to one element of the reinforcement learning problem called the transition function. The transition function defines the probability an agent will move from one state to another, based on the action it chooses.
If the agent is playing Pac-Man, a transition function might define the probability that ghosts on the game board will move up, down, left, or right. In standard reinforcement learning, the AI would be trained and tested using the same transition function.
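The transition-function idea can be sketched in a few lines of Python. This is an illustrative toy only: the base probabilities and the `noise` mixing scheme are assumptions for the sketch, not the paper's exact method.

```python
import random

MOVES = ["up", "down", "left", "right"]

def ghost_transition(noise=0.0):
    """Return a probability distribution over a ghost's next move.

    With probability `noise` the move is drawn uniformly at random;
    otherwise it follows a fixed base policy (assumed here for illustration).
    """
    base = {"up": 0.4, "down": 0.4, "left": 0.1, "right": 0.1}
    uniform = {m: 1.0 / len(MOVES) for m in MOVES}
    # Mix the base policy with uniform randomness to inject noise.
    return {m: (1 - noise) * base[m] + noise * uniform[m] for m in MOVES}

def sample_move(dist):
    """Draw one move from the probability distribution."""
    r, cumulative = random.random(), 0.0
    for move, p in dist.items():
        cumulative += p
        if r < cumulative:
            return move
    return MOVES[-1]
```

At `noise=0.0` the ghost follows its base policy exactly; at `noise=1.0` its moves are fully random, which is the kind of mismatch between training and testing environments the researchers studied.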
Following this conventional approach, the researchers added noise to the transition function and, as expected, the noise hurt the agent’s Pac-Man performance.
But when the researchers trained the agent with a noise-free Pac-Man game, then tested it in an environment where they injected noise into the transition function, it performed better than an agent trained on the noisy game.
“The rule of thumb is that you should try to capture the deployment condition’s transition function as well as you can during training to get the most bang for your buck. We really tested this insight to death because we couldn’t believe it ourselves,” Madan says.
Injecting varying amounts of noise into the transition function let the researchers test many environments, but it didn’t create realistic games. The more noise they injected into Pac-Man, the more likely the ghosts were to randomly teleport to different squares.
To see if the indoor training effect occurred in normal Pac-Man games, they adjusted underlying probabilities so ghosts moved normally but were more likely to move up and down, rather than left and right. AI agents trained in noise-free environments still performed better in these realistic games.
“It was not only due to the way we added noise to create ad hoc environments. This seems to be a property of the reinforcement learning problem. And that was even more surprising to see,” Bono says.
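The train-clean/test-noisy protocol described above can be mocked up with tabular Q-learning on a tiny gridworld instead of Atari. Everything here is an illustrative assumption: the environment, hyperparameters, and noise level are not from the paper, and in this toy setting either agent may come out ahead on a given run.

```python
import random

ACTIONS = ["up", "down", "left", "right"]
SIZE, GOAL = 4, (3, 3)

def step(state, action, noise=0.0):
    # Transition-function noise: with probability `noise`, the chosen
    # action is swapped for a uniformly random one.
    if random.random() < noise:
        action = random.choice(ACTIONS)
    x, y = state
    if action == "up":
        y = max(y - 1, 0)
    elif action == "down":
        y = min(y + 1, SIZE - 1)
    elif action == "left":
        x = max(x - 1, 0)
    else:
        x = min(x + 1, SIZE - 1)
    return (x, y)

def train(train_noise, episodes=2000, alpha=0.1, gamma=0.95, eps=0.1):
    """Learn a tabular Q-function under the given transition noise."""
    q = {}
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(50):
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt = step(state, action, noise=train_noise)
            reward = 1.0 if nxt == GOAL else -0.01
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
            if state == GOAL:
                break
    return q

def success_rate(q, test_noise, episodes=500):
    """Fraction of test episodes in which the greedy policy reaches the goal."""
    wins = 0
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(50):
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            state = step(state, action, noise=test_noise)
            if state == GOAL:
                wins += 1
                break
    return wins / episodes

# "Indoor" agent trained noise-free; "outdoor" agent trained in the noise;
# both are evaluated in the same noisy test environment.
indoor = train(train_noise=0.0)
outdoor = train(train_noise=0.3)
print(success_rate(indoor, test_noise=0.3))
print(success_rate(outdoor, test_noise=0.3))
```

The indoor training effect is the finding that, in some settings, the agent trained without noise still scores higher on the noisy test, contrary to the rule of thumb of matching training and deployment conditions.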
Exploration explanations
When the researchers dug deeper in search of an explanation, they saw some correlations in how the AI agents explore the training space.
When both AI agents explore mostly the same areas, the agent trained in the non-noisy environment performs better, perhaps because it is easier for the agent to learn the rules of the game without the interference of noise.
If their exploration patterns are different, then the agent trained in the noisy environment tends to perform better. This might occur because the agent needs to understand patterns it can’t learn in the noise-free environment.
“If I only learn to play tennis with my forehand in the non-noisy environment, but then in the noisy one I have to also play with my backhand, I won’t play as well in the non-noisy environment,” Bono explains.
In the future, the researchers hope to explore how the indoor training effect might occur in more complex reinforcement learning environments, or with other techniques like computer vision and natural language processing. They also want to build training environments designed to leverage the indoor training effect, which could help AI agents perform better in uncertain environments.
EFF to State AGs: Time to Investigate Crisis Pregnancy Centers
Discovering that you’re pregnant can trigger a mix of emotions—excitement, uncertainty, or even distress—depending on your circumstances. Whatever your feelings are, your next steps will likely involve disclosing that news, along with other deeply personal information, to a medical provider or counselor as you explore your options.
Many people will choose to disclose that information to their trusted obstetricians, or visit their local Planned Parenthood clinic. Others, however, may instead turn to a crisis pregnancy center (CPC). Trouble is, some of these centers may not be doing a great job of prioritizing or protecting their clients’ privacy.
CPCs (also known as “fake clinics”) are facilities that are often connected to religious organizations and have a strong anti-abortion stance. While many offer pregnancy tests, counseling, and information, as well as limited medical services in some cases, they do not provide reproductive healthcare such as abortion or, in many cases, contraception. Some are licensed medical clinics; most are not. Either way, these services are a growing enterprise: in 2022, CPCs reportedly received $1.4 billion in revenue, including substantial federal and state funds.
Last year, researchers at the Campaign for Accountability filed multiple complaints urging attorneys general in five states—Idaho, Minnesota, Washington, Pennsylvania, and New Jersey—to investigate crisis pregnancy centers that allegedly had misrepresented, through their client intake process and/or websites, that information provided to them was protected by the Health Insurance Portability and Accountability Act (“HIPAA”).
Additionally, an incident in Louisiana raised concerns that CPCs may be sharing client information with other centers in their affiliated networks, without appropriate privacy or anonymity protections. In that case, a software training video inadvertently disclosed the names and personal information of roughly a dozen clients.
Unfortunately, these privacy practices aren’t confined to those states. For example, the Pregnancy Help Center, located in Missouri, states on its website that:
Pursuant to the Health Insurance Portability and Accountability Act (HIPAA), Pregnancy Help Center has developed a notice for patients, which provides a clear explanation of privacy rights and practices as it relates to private health information.
And its Notice of Privacy Practices suggests oversight by the U.S. Department of Health and Human Services, instructing clients who feel their rights were violated to:
file a complaint with the U.S. Department of Health and Human Services Office for Civil Rights by sending a letter to 200 Independence Avenue, S.W., Washington, D.C. 20201, calling 1-877-696-6775, or visiting www.hhs.gov/ocr/privacy/hipaa/complaints/.
Websites for centers in other states, such as Florida, Texas, and Arkansas, contain similar language.
As we’ve noted before, there are far too few protections for user privacy, including medical privacy, and individuals have little control over how their personal data is collected, stored, and used. Until Congress passes a comprehensive privacy law that includes a private right of action, state attorneys general must take proactive steps to protect their constituents from unfair or deceptive privacy practices. Accordingly, EFF has called on attorneys general in Florida, Texas, Arkansas, and Missouri to investigate potential privacy violations and hold accountable CPCs that engage in deceptive practices.
Regardless of your views on reproductive healthcare, we should all agree that privacy is a basic human right, and that consumers deserve transparency. Our elected officials have a responsibility to ensure that personal information, especially our sensitive medical data, is protected.
What Proponents of Digital Replica Laws Can Learn from the Digital Millennium Copyright Act
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
Performers—and ordinary people—are understandably concerned that they may be replaced or defamed by AI-generated imitations. We’ve seen a host of state and federal bills designed to address that concern, but every one just generates new problems.
One of the most pernicious proposals is the NO FAKES Act, and Copyright Week is a good time to remember why. We’ve detailed the many problems of the bill before, but, ironically enough, one of the worst aspects is the bone it throws to critics who worry the legislation’s broad provisions and dramatic penalties will lead platforms to over-censor online expression: a safe harbor scheme modeled on the DMCA notice and takedown process.
In essence, platforms can avoid liability if they remove all instances of allegedly illegal content once they are notified that the content is unauthorized. Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every single copy made, transmitted, or displayed is a separate violation, incurring a $5,000 penalty, which adds up fast. The bill does offer one not very useful carveout: if a platform can prove in court that it had an objectively reasonable belief that the content was lawful, the penalties for getting it wrong are capped at $1 million.
The safe harbors offer cold comfort to platforms and the millions of people who rely on them to create, share, and access content. The DMCA notice and takedown process has offered important protections for the development of new venues for speech, helping creators find audiences and vice versa. Without those protections, Hollywood would have had a veto right over all kinds of important speech tools and platforms, from basic internet service to social media and news sites to any other service that might be used to host or convey copyrighted content, thanks to copyright’s ruinous statutory penalties. The risks of accidentally facilitating infringement would have been just too high.
But the DMCA notice and takedown process has also been regularly abused to target lawful speech. Congress knew this was a risk, so it built in some safeguards: a counter-notice process to help users get improperly targeted content restored, and a process for deterring that abuse in the first place by allowing users to hold notice senders accountable when they misuse the process. Unfortunately, some courts have mistakenly interpreted the latter provisions to require showing that the sender subjectively knew it was lying when it claimed the content was unlawful. That standard is very hard to meet in most cases.
Proponents of a new digital replica right could have learned from that experience and created a notice process with strong provisions against abuse. Those provisions are even more necessary here, where it would be even harder for providers to know whether a notice is false. Instead, NO FAKES offers fewer safeguards than the DMCA. For example, while the DMCA puts the burden on the rightsholder to put up or shut up (i.e., file a lawsuit) if a speaker pushes back and explains why the content is lawful, NO FAKES instead puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not.
And the NO FAKES provisions to allow improperly targeted speakers to hold the notice abuser accountable will offer as little deterrent as the roughly parallel provisions in the DMCA. As with the DMCA, a speaker must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as they subjectively believe the lie to be true, no matter how unreasonable that belief.
If proponents want to protect online expression for everyone, at a minimum they should redraft the counter-notice process to more closely model the DMCA, and clarify that abusers, like platforms, will be held to an objective knowledge standard. If they don’t, the advent of digital replicas will, ironically enough, turn out to be an excuse to strangle all kinds of new and old creativity.
California Law Enforcement Misused State Databases More Than 7,000 Times in 2023
The Los Angeles County Sheriff’s Department (LACSD) committed wholesale abuse of sensitive criminal justice databases in 2023, violating a specific rule against searching the data to run background checks for concealed carry firearm permits.
The sheriff’s department’s 6,789 abuses made up a majority of the record 7,275 violations across California that were reported to the state Department of Justice (CADOJ) in 2023 regarding the California Law Enforcement Telecommunications System (CLETS).
Records obtained by EFF also included numerous cases of other forms of database abuse in 2023, such as police allegedly using data for personal vendettas. While many violations resulted only in officers or other staff being retrained in appropriate use of the database, departments across the state reported that violations in 2023 led to 24 officers being suspended, six officers resigning, and nine being fired.
CLETS contains a lot of sensitive information and is meant to provide officers in California with access to a variety of databases, including records from the Department of Motor Vehicles, the National Law Enforcement Telecommunications System, Criminal Justice Information Services, and the National Crime Information Center. Law enforcement agencies with access to CLETS are required to inform the state Justice Department of any investigations and discipline related to misuse of the system. This mandatory reporting helps to provide oversight and transparency around how local agencies are using and abusing their access to the array of databases.
A slide from a Long Beach Police Department training for new recruits.
Misuse can take many forms, ranging from sharing passwords to using the system to look up romantic partners or celebrities. In 2019, CADOJ declared that using CLETS data for "immigration enforcement" is considered misuse under the California Values Act.
EFF periodically files California Public Records Act requests for the data and records generated by these CLETS misuse disclosures. To help improve access to this data, EFF's investigations team has compiled and compressed that information from 2019-2023 for public download. Researchers and journalists can look up the individual data per agency year-to-year.
Download the 2019-2023 data here. Data from previous years is available here: 2010-2014, 2015, 2016, 2017, 2018.
California agencies are required to report misuse of CLETS to CADOJ by February 1 of the following year, which means numbers for 2024 are due to the state agency at the end of this month. However, it often takes the state several more months to follow up with agencies that do not respond and to enter information from the individual forms into a database.
Across California between 2019 and 2023, there have been:
- 761 investigations of CLETS misuse, resulting in findings of at least 7,635 individual violations of the system’s rules
- 55 officer suspensions, 50 resignations, and 42 firings related to CLETS misuse
- six misdemeanor convictions and one felony conviction related to CLETS misuse
As we reviewed the data made public since 2019, there were a few standout situations worth additional reporting. For example, LACSD in 2023 conducted a single investigation into CLETS misuse that substantiated thousands of misuse claims. The Riverside County Sheriff's Office and Pomona Police Department also found hundreds of violations of access to CLETS the same year.
Some of the highest profile cases include:
- LACSD’s use of criminal justice data for concealed carry permit research, which is specifically forbidden by CLETS rules. According to meeting notes of the CLETS oversight body, LACSD retrained all staff and implemented new processes. However, state Justice Department officials acknowledged that this problem was not unique, and they had documented other agencies abusing the data in the same way.
- A Redding Police Department officer in 2021 was charged with six misdemeanors after being accused of accessing CLETS to set up a traffic stop for his fiancée's ex-husband, resulting in the man's car being towed and impounded, the local outlet A News Cafe reported. Court records show the officer was fired, but he was ultimately acquitted by a jury in the criminal case. He now works for a different police department 30 miles away.
- The Folsom Police Department in 2021 fired an officer who was accused of sending racist texts and engaging in sexual misconduct, as well as abusing CLETS. However, the Sacramento County District Attorney told a local TV station it declined to file charges, citing insufficient evidence.
- A Madera Police Officer in 2021 resigned and pleaded guilty to accessing CLETS and providing that information to an unauthorized person. He received a one-year suspended sentence and 100 hours of community service, according to court records. In a statement, the police department said the individual's "behavior was absolutely inappropriate" and "his actions tarnish the nobility of our profession."
- A California Highway Patrol officer was charged with improperly accessing CLETS to investigate vehicles his friend was interested in purchasing as part of his automotive business.
The San Francisco Police Department, which failed to report its CLETS numbers for 2023, may be reporting at least one violation from the past year, according to a May 2024 report of sustained complaints, which lists one substantiated violation involving “Computer/CAD/CLETS Misuse.”
CLETS is only one of many massive databases available to law enforcement, but it is one of the very few with a mandatory reporting requirement for abuse; violations of other systems likely go unreported to any state oversight body, or unreported altogether. The sheer amount of misuse should serve as a warning that other systems police use, such as automated license plate reader and face recognition databases, are likely also being abused at a high rate, or even higher, since they are not subject to the same scrutiny as CLETS.
Related Cases: California Law Enforcement Telecommunications System
CISA Under Trump
Jen Easterly is out as the Director of CISA. Read her final interview:
There’s a lot of unfinished business. We have made an impact through our ransomware vulnerability warning pilot and our pre-ransomware notification initiative, and I’m really proud of that, because we work on preventing somebody from having their worst day. But ransomware is still a problem. We have been laser-focused on PRC cyber actors. That will continue to be a huge problem. I’m really proud of where we are, but there’s much, much more work to be done. There are things that I think we can continue driving, that the next administration, I hope, will look at, because, frankly, cybersecurity is a national security issue...