Feed aggregator
How Cops Are Using Flock Safety's ALPR Network to Surveil Protesters and Activists
It's no secret that 2025 has given Americans plenty to protest about. But as news cameras showed protesters filling streets of cities across the country, law enforcement officers—including U.S. Border Patrol agents—were quietly watching those same streets through different lenses: Flock Safety automated license plate readers (ALPRs) that tracked every passing car.
Through an analysis of 10 months of nationwide searches on Flock Safety's servers, we discovered that more than 50 federal, state, and local agencies ran hundreds of searches through Flock's national network of surveillance data in connection with protest activity. In some cases, law enforcement specifically targeted known activist groups, demonstrating how mass surveillance technology increasingly threatens our freedom to demonstrate.
Flock Safety provides ALPR technology to thousands of law enforcement agencies. The company installs cameras throughout their jurisdictions, and these cameras photograph every car that passes, documenting the license plate, color, make, model, and other distinguishing characteristics. This data is paired with time and location and uploaded to a massive searchable database. Flock Safety encourages agencies to share the data they collect broadly with other agencies across the country. It is common for an agency to search thousands of networks nationwide even when it has no reason to believe a targeted vehicle left the region.
Via public records requests, EFF obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025. The data shows that agencies logged hundreds of searches related to the 50501 protests in February, the Hands Off protests in April, the No Kings protests in June and October, and other protests in between.
The Tulsa Police Department in Oklahoma was one of the most consistent users of Flock Safety's ALPR system for investigating protests, logging at least 38 such searches. This included running searches that corresponded to a protest against deportation raids in February, a protest at Tulsa City Hall in support of pro-Palestinian activist Mahmoud Khalil in March, and the No Kings protest in June. During the most recent No Kings protests in mid-October, agencies such as the Lisle Police Department in Illinois, the Oro Valley Police Department in Arizona, and the Putnam County (Tenn.) Sheriff's Office all ran protest-related searches.
While EFF and other civil liberties groups argue the law should require a search warrant for such searches, police are simply prompted to enter text into a "reason" field in the Flock Safety system. Usually this is only a few words–or even just one.
In these cases, that word was often just “protest.”
Crime does sometimes occur at protests, whether that's property damage, pick-pocketing, or clashes between groups on opposite sides of a protest. Some of these searches may have been tied to an actual crime that occurred, even though in most cases officers did not articulate a criminal offense when running the search. But the truth is, the only reason an officer is able to even search for a suspect at a protest is because ALPRs collected data on every single person who attended the protest.
Search and Dissent
2025 was an unprecedented year of street action. In June and again in October, thousands across the country mobilized under the banner of the “No Kings” movement—marches against government overreach, surveillance, and corporate power. By some estimates, the October demonstrations ranked among the largest single-day protests in U.S. history, filling the streets from Washington, D.C., to Portland, OR.
EFF identified 19 agencies that logged dozens of searches associated with the No Kings protests in June and October 2025. In some cases the term "No Kings" was explicitly used as the reason, while in others officers logged only "protest," but the searches coincided with the massive demonstrations.
Law Enforcement Agencies that Ran Searches Corresponding with "No Kings" Rallies
- Anaheim Police Department, Calif.
- Arizona Department of Public Safety
- Beaumont Police Department, Texas
- Charleston Police Department, S.C.
- Flagler County Sheriff's Office, Fla.
- Georgia State Patrol
- Lisle Police Department, Ill.
- Little Rock Police Department, Ark.
- Marion Police Department, Ohio
- Morristown Police Department, Tenn.
- Oro Valley Police Department, Ariz.
- Putnam County Sheriff's Office, Tenn.
- Richmond Police Department, Va.
- Riverside County Sheriff's Office, Calif.
- Salinas Police Department, Calif.
- San Bernardino County Sheriff's Office, Calif.
- Spartanburg Police Department, S.C.
- Tempe Police Department, Ariz.
- Tulsa Police Department, Okla.
- U.S. Border Patrol
For example:
- In Washington state, the Spokane County Sheriff's Office listed "no kings" as the reason for three searches on June 13, 2025. The agency queried 95 camera networks, looking for vehicles matching the description of "work van," "bus" or "box truck."
- In Texas, the Beaumont Police Department ran six searches related to two vehicles on June 14, 2025, listing "KINGS DAY PROTEST" as the reason. The queries reached across 1,774 networks.
- In California, the San Bernardino County Sheriff's Office ran a single search for a vehicle across 711 networks, logging "no king" as the reason.
- In Arizona, the Tempe Police Department made three searches for "ATL No Kings Protest" on June 15, 2025, searching through 425 networks. "ATL" is police code for "attempt to locate." The agency appears not to have been looking for a particular plate, but for any red vehicle on the road during a certain time window.
But the No Kings protests weren't the only demonstrations drawing law enforcement's digital dragnet in 2025.
For example:
- In Nevada's state capital, the Carson City Sheriff's Office ran three searches that correspond to the February 50501 Protests against DOGE and the Trump administration. The agency searched for two vehicles across 178 networks with "protest" as the reason.
- In Florida, the Seminole County Sheriff's Office logged "protest" for five searches that correspond to a local May Day rally.
- In Alabama, the Homewood Police Department logged four searches in early July 2025 for three vehicles with "PROTEST CASE" and "PROTEST INV." in the reason field. The searches, which probed 1,308 networks, correspond to protests against the police shooting of Jabari Peoples.
- In Texas, the Lubbock Police Department ran two searches on March 15 for a Tennessee license plate that correspond to a rally to highlight the mental health impact of immigration policies. The searches hit 5,966 networks, with the logged reason "protest veh."
- In Michigan, the Grand Rapids Police Department ran five searches that corresponded with the Stand Up and Fight Back Rally in February. The searches hit roughly 650 networks, with the reason logged as "Protest."
Some agencies have adopted policies that prohibit using ALPRs for monitoring activities protected by the First Amendment. Yet many officers probed the nationwide network with terms like "protest" without articulating an actual crime under investigation.
In a few cases, police were using Flock’s ALPR network to investigate threats made against attendees or incidents where motorists opposed to the protests drove their vehicle into crowds. For example, throughout June 2025, an Arizona Department of Public Safety officer logged three searches for “no kings rock threat,” and a Wichita (Kan.) Police Department officer logged 22 searches for various license plates under the reason “Crime Stoppers Tip of causing harm during protests.”
Even when law enforcement is specifically looking for vehicles engaged in potentially criminal behavior such as threatening protesters, it cannot be ignored that mass surveillance systems work by collecting data on everyone driving to or near a protest—not just those under suspicion.
Border Patrol's Expanding Reach
As U.S. Border Patrol (USBP), ICE, and other federal agencies tasked with immigration enforcement have massively expanded operations into major cities, advocates for immigrants have responded through organized rallies, rapid-response confrontations, and extended presences at federal facilities.
USBP has made extensive use of Flock Safety's system for immigration enforcement, but also to target those who object to its tactics. In June, a few days after the No Kings Protest, USBP ran three searches for a vehicle using the descriptor “Portland Riots.”
USBP also used the Flock Safety network to investigate a motorist who had “extended his middle finger” at Border Patrol vehicles that were transporting detainees. The motorist then allegedly drove in front of one of the vehicles and slowed down, forcing the Border Patrol vehicle to brake hard. An officer ran seven searches for his plate, citing "assault on agent" and "18 usc 111," the federal criminal statute for assaulting, resisting or impeding a federal officer. The individual was charged in federal court in early August.
USBP had access to the Flock system during a trial period in the first half of 2025, but the company says it has since paused the agency's access to the system. However, Border Patrol and other federal immigration authorities have been able to access the system’s data through local agencies who have run searches on their behalf or even lent them logins.
Targeting Animal Rights Activists
Law enforcement's use of Flock's ALPR network to surveil protesters isn't limited to large-scale political demonstrations. Three agencies also used the system dozens of times to specifically target activists from Direct Action Everywhere (DxE), an animal-rights organization known for using civil disobedience tactics to expose conditions at factory farms.
Delaware State Police queried the Flock national network nine times in March 2025 related to DxE actions, logging reasons such as "DxE Protest Suspect Vehicle." DxE advocates told EFF that these searches correspond to an investigation the organization undertook of a Mountaire Farms facility.
Additionally, the California Highway Patrol logged dozens of searches related to a "DXE Operation" throughout the day on May 27, 2025. The organization says this corresponds with an annual convening in California that typically ends in a direct action. Participants leave the event early in the morning, then drive across the state to a predetermined but previously undisclosed protest site. Also in May, the Merced County Sheriff's Office in California logged two searches related to "DXE activity."
As an organization engaged in direct activism, DxE has experienced criminal prosecution for its activities, and so the organization told EFF they were not surprised to learn they are under scrutiny from law enforcement, particularly considering how industrial farmers have collected and distributed their own intelligence to police.
The targeting of DxE activists reveals how ALPR surveillance extends beyond conventional and large-scale political protests to target groups engaged in activism that challenges powerful industries. For animal-rights activists, the knowledge that their vehicles are being tracked through a national surveillance network undeniably creates a chilling effect on their ability to organize and demonstrate.
Fighting Back Against ALPR
ALPR systems are designed to capture information on every vehicle that passes within view. That means they don't just capture data on "criminals" but on everyone, all the time—and that includes people engaged in their First Amendment right to publicly dissent. Police are sitting on massive troves of data that can reveal who attended a protest, and this data shows they are not afraid to use it.
Our analysis only includes data where agencies explicitly mentioned protests or related terms in the "reason" field when documenting their search. It's likely that scores more were conducted under less obvious pretexts and search reasons. According to our analysis, approximately 20 percent of all searches we reviewed listed vague language like "investigation," "suspect," and "query" in the reason field. Those terms could well be cover for spying on a protest, an abortion prosecution, or an officer stalking a spouse, and no one would be the wiser–including the agencies whose data was searched. Flock has said it will now require officers to select a specific crime under investigation, but that can and will also be used to obfuscate dubious searches.
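For readers curious about the mechanics, the kind of keyword filtering described above takes only a few lines of analysis code. The sketch below is a minimal illustration, assuming a CSV export with hypothetical "agency" and "reason" columns; the actual audit logs EFF obtained vary in format from agency to agency, and this is not EFF's analysis code.

```python
# Minimal sketch of keyword filtering over ALPR search-audit logs. The file
# name and column names ("agency", "reason") are assumptions for illustration.
import pandas as pd

PROTEST_TERMS = ["protest", "no kings", "50501", "rally", "dxe"]
VAGUE_TERMS = {"investigation", "suspect", "query"}

logs = pd.read_csv("flock_search_audit.csv")        # hypothetical export
reason = logs["reason"].fillna("").str.lower()

# Searches whose stated reason mentions a protest-related term
protest_hits = logs[reason.str.contains("|".join(PROTEST_TERMS), regex=True)]
print(f"{len(protest_hits)} protest-related searches")
print(protest_hits.groupby("agency").size().sort_values(ascending=False).head(10))

# Rough analog of the roughly 20 percent of searches logged with only vague language
vague_share = reason.str.strip().isin(VAGUE_TERMS).mean()
print(f"Share of vague one-word reasons: {vague_share:.1%}")
```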
For protesters, this data should serve as confirmation that ALPR surveillance has been and will be used to target activities protected by the First Amendment. Depending on your threat model, this means you should think carefully about how you arrive at protests, and explore options such as biking, walking, carpooling, taking public transportation, or simply parking a little farther away from the action. Our Surveillance Self-Defense project has more information on steps you can take to protect your privacy when traveling to and attending a protest.
For local officials, this should serve as another example of how systems marketed as protecting your community may actually threaten the values your community holds most dear. The best way to protect people is to shut down these camera networks.
Everyone should have the right to speak up against injustice without ending up in a database.
Faces of MIT: Brian Hanna
Brian Hanna, operations manager of MIT Venture Mentoring Service (VMS), connects skilled volunteer mentors with MIT entrepreneurs looking to launch, expand, and enhance their vision.
MIT VMS is a free service, supporting innovation across the Institute, available to all current MIT students, staff members, faculty members, and alums of a degree-granting program living in the Greater Boston area. If a community member has an idea that they’d like help developing, Hanna and his team will match them with a team of mentors who can provide practical, as-needed expertise and knowledge to guide their venture.
VMS is part of the MIT ecosystem for entrepreneurs. VMS mentors are selected for their experience in areas relevant to entrepreneurs’ needs and assist with a range of business challenges, including marketing, finance, and product development. As the program celebrates its 25th anniversary of serving MIT’s entrepreneurial community, it has supported more than 3,500 ventures and mentored over 4,800 participants.
When Hanna began working at VMS in 2023, he was new to the program but not to the Institute. Prior to joining VMS, he served as the employer relations coordinator in Career Advising and Professional Development (CAPD), where he worked with companies interested in recruiting MIT talent. His responsibilities included organizing career fairs, scheduling interviews, and building relationships with various local employers. After two years at CAPD, Hanna transitioned to the role of center coordinator at the McGovern Institute for Brain Research. While Hanna does not claim to be a neuroscientist, his organizational skills proved valuable as he supported six different research centers at McGovern, with research ranging from autism to bionics.
As the VMS operations manager, Hanna supervises staff members who run events and boot camps and schedule an average of 50 mentoring sessions a month. Whether it’s a first-time entrepreneur who comes up with an idea on their morning commute or an industry veteran with licensing and a patent in place, Hanna strategically matches them with mentors who can help them build their skill set and grow their business. Hanna also provides oversight to over 200 volunteer VMS mentors, half of whom are MIT alumni.
In addition to processing all incoming applications (about 25 per month), Hanna also oversees a monthly mentor meeting centered around strengthening the VMS mentor community. During the meeting, the VMS team shares announcements, discusses upcoming events, hosts guest speakers, and invites a group of current ventures to give four-minute pitches for additional advice. These pitches allow mentees to receive input from the entire mentor network, rather than just their mentor team.
The relationship between mentees, mentors, and VMS does not have an expiration date. Hanna notes that a saying in the office is, “we are VMS for life.” This rings true, as some ventures and mentors have been a part of the program for most of its 25-year existence.
When a mentee is ready to meet with their mentors for the first time, VMS aims to schedule an in-person meeting to create a strong relationship. After that, the program embraces the flexibility of meeting via Zoom to help make scheduling easier. One of the most valuable resources outside of the mentoring sessions is the theme-specific boot camps sprinkled throughout the year. These sessions are four- or five-hour events led by mentors who cover topics such as marketing, business-to-business sales, or building an IP portfolio. They serve as crash courses where mentees can learn the basics of important aspects of entrepreneurship. Another resource offered to active mentees is office hours with experts in areas such as human resources, legal, and accounting.
In December, VMS will celebrate its 25th anniversary with an event honoring current and former mentors. The event will look back on 25 years of impact and look ahead to the future of the program.
Soundbytes
Q: Do you have an MIT memory or project that brings you pride?
Hanna: At the McGovern Institute, I was part of a team that worked on the first board meeting and launch event for the K. Lisa Yang Center for Bionics, which was an incredible experience. It was a brand-new research center led by world-class researchers and innovators. Since it was the first board meeting it was a big deal, so we planned to host a celebration tied to the meeting. There were a lot of moving parts and collaboration between faculty, researchers, staff, board members, and vendors. It took place at the tail end of Covid, which was an added challenge. With such an important event you don’t want to let anyone down. In the end, it worked out really well, was a fun event to be a part of, and something I never thought I would be able to do.
Q: How would you describe the community at MIT?
Hanna: Very welcoming. I was intimidated when I first interviewed at MIT because, as someone who isn’t a STEM person, MIT was never on my radar. Then a job came up, and I thought, I'll apply for that. When I started working here, there was always someone available to provide assistance and point me in the right direction. Everyone is incredibly talented and innovative — not just in creating things, but also in problem-solving and finding ways to collaborate. Each time I changed roles, everyone I met was down-to-earth, kind, and extremely helpful during onboarding. It was never sink or swim — it was always nurturing.
Q: What advice would you give to a new staff member at MIT?
Hanna: Make connections with people outside of your immediate network. Get involved in the community by attending events or reaching out to people. For both jobs which I held after working at CAPD, I reached out to the hiring manager when I saw the job posting and asked a couple clarifying questions. Also, it’s important to know that everything is numbered; the buildings, the majors, everything.
The Trump Administration’s Order on AI Is Deeply Misguided
Widespread news reports indicate that President Donald Trump’s administration has prepared an executive order to punish states that have passed laws attempting to address harms from artificial intelligence (AI) systems. According to a draft published by news outlets, this order would direct federal agencies to bring legal challenges to state AI regulations that the administration deems “onerous,” to restrict funding to those states that have these laws, and to adopt new federal law that overrides state AI laws.
This approach is deeply misguided.
As we’ve said before, the fact that states are regulating AI is often a good thing. Left unchecked, company and government use of automated decision-making systems in areas such as housing, health care, law enforcement, and employment have already caused discriminatory outcomes based on gender, race, and other protected statuses.
While state AI laws have not been perfect, they are genuine attempts to address harms that people across the country face from certain uses of AI systems right now. Given the tone of the Trump Administration’s draft order, it seems clear that the preemptive federal legislation backed by this administration will not stop the ways that automated decision-making systems can result in discriminatory decisions.
For example, a copy of the draft order published by Politico specifically names the Colorado AI Act as an example of supposedly “onerous” legislation. As we said in our analysis of Colorado’s law, it is a limited but crucial step—one that needs to be strengthened to protect people more meaningfully from AI harms. It is possible to guard against harms and support innovation and expression. Ignoring the harms that these systems can cause when used in discriminatory ways is not the way to do that.
Again: stopping states from acting on AI will stop progress. Proposals such as the executive order, or efforts to put a broad moratorium on state AI laws into the National Defense Authorization Act (NDAA), will hurt us all. Companies that produce AI and automated decision-making software have spent millions in state capitals and in Congress to slow or roll back legal protections regulating artificial intelligence. If reports about the Trump administration’s executive order are true, those efforts are about to get a supercharged ally in the federal government.
And all of us will pay the price.
EFF Demands Answers About ICE-Spotting App Takedowns
SAN FRANCISCO – The Electronic Frontier Foundation (EFF) sued the departments of Justice (DOJ) and Homeland Security (DHS) today to uncover information about the federal government demanding that tech companies remove apps that document immigration enforcement activities in communities throughout the country.
Tech platforms took down several such apps (including ICE Block, Red Dot, and DeICER) and webpages (including ICE Sighting-Chicagoland) following communications with federal officials this year, raising important questions about government coercion to restrict protected First Amendment activity.
"We're filing this lawsuit to find out just what the government told tech companies," said EFF Staff Attorney F. Mario Trujillo. "Getting these records will be critical to determining whether federal officials crossed the line into unconstitutional coercion and censorship of protected speech."
In October, Apple removed ICEBlock, an app that allows users to report Immigration and Customs Enforcement (ICE) activity in their area, from its App Store. Attorney General Pamela Bondi publicly took credit for the takedown, telling reporters, “We reached out to Apple today demanding they remove the ICEBlock app from their App Store—and Apple did so.” In the days that followed, Apple removed several similar apps from the App Store. Google and Meta removed similar apps and webpages from platforms they own as well. Bondi vowed to “continue engaging tech companies” on the issue.
People have a protected First Amendment right to document and share information about law enforcement activities performed in public. If government officials coerce third parties into suppressing protected activity, this can be unconstitutional, as the government cannot do indirectly what it is barred from doing directly.
Last month, EFF submitted Freedom of Information Act (FOIA) requests to the DOJ, DHS and its component agencies ICE and Customs and Border Protection. The requests sought records and communications about agency demands that technology companies remove apps and pages that document immigration enforcement activities. So far, none of the agencies have provided these records. EFF's FOIA lawsuit demands their release.
For the complaint: https://www.eff.org/document/complaint-eff-v-doj-dhs-ice-tracking-apps
For more about the litigation: https://www.eff.org/cases/eff-v-doj-dhs-ice-tracking-apps
Contact: F. Mario Trujillo, Staff Attorney, mario@eff.org
Scam USPS and E-Z Pass Texts and Websites
Google has filed a complaint in court that details the scam:
In a complaint filed Wednesday, the tech giant accused “a cybercriminal group in China” of selling “phishing for dummies” kits. The kits help unsavvy fraudsters easily “execute a large-scale phishing campaign,” tricking hordes of unsuspecting people into “disclosing sensitive information like passwords, credit card numbers, or banking information, often by impersonating well-known brands, government agencies, or even people the victim knows.”
These branded “Lighthouse” kits offer two versions of software, depending on whether bad actors want to launch SMS and e-commerce scams. “Members may subscribe to weekly, monthly, seasonal, annual, or permanent licenses,” Google alleged. Kits include “hundreds of templates for fake websites, domain set-up tools for those fake websites, and other features designed to dupe victims into believing they are entering sensitive information on a legitimate website.”...
EPA falls behind schedule for repealing endangerment finding
‘Drowning under paper’: Vulnerable countries push to slice red tape for climate aid
Gas exports may increase Americans’ heating bills, EIA says
Rising seas threaten thousands of hazardous US facilities
Alito is urged to back out of Louisiana coastal erosion case
Senate upholds Trump administration methane rule
New York Democrats split on climate law
Turkey to host 2026 climate summit, in defeat for Australia
EU missing from COP30 push to drop fossil fuels
EU strains to defend carbon levy as trade tensions engulf COP30
Rail project raises questions about Brazil’s effort to protect the Amazon
South Africa to urge rich nations to do more against climate change at G20
Misalignment between objective and perceived heat risks
Nature Climate Change, Published online: 20 November 2025; doi:10.1038/s41558-025-02505-9
Objective assessments indicate that extreme heat is increasing health risks; however, many of the most exposed populations do not perceive extreme heat as risky. This misperception may undermine public awareness of the need for effective cooling strategies, leaving a dangerous blind spot in adaptation and protection.
Scientists get a first look at the innermost region of a white dwarf system
Some 200 light years from Earth, the core of a dead star is circling a larger star in a macabre cosmic dance. The dead star is a type of white dwarf that exerts a powerful magnetic field as it pulls material from the larger star into a swirling, accreting disk. The spiraling pair is what’s known as an “intermediate polar” — a type of star system that gives off a complex pattern of intense radiation, including X-rays, as gas from the larger star falls onto the other one.
Now, MIT astronomers have used an X-ray telescope in space to identify key features in the system’s innermost region — an extremely energetic environment that has been inaccessible to most telescopes until now. In an open-access study published in the Astrophysical Journal, the team reports using NASA’s Imaging X-ray Polarimetry Explorer (IXPE) to observe the intermediate polar, known as EX Hydrae.
The team found a surprisingly high degree of X-ray polarization, which describes the direction of an X-ray wave’s electric field, as well as an unexpected direction of polarization in the X-rays coming from EX Hydrae. From these measurements, the researchers traced the X-rays back to their source in the system’s innermost region, close to the surface of the white dwarf.
What’s more, they determined that the system’s X-rays were emitted from a column of white-hot material that the white dwarf was pulling in from its companion star. They estimate that this column is about 2,000 miles high — about half the radius of the white dwarf itself and much taller than what physicists had predicted for such a system. They also determined that the X-rays are reflected off the white dwarf’s surface before scattering into space — an effect that physicists suspected but hadn’t confirmed until now.
The team’s results demonstrate that X-ray polarimetry can be an effective way to study extreme stellar environments such as the most energetic regions of an accreting white dwarf.
“We showed that X-ray polarimetry can be used to make detailed measurements of the white dwarf's accretion geometry,” says Sean Gunderson, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research, who is the study’s lead author. “It opens the window into the possibility of making similar measurements of other types of accreting white dwarfs that also have never had predicted X-ray polarization signals.”
Gunderson’s MIT Kavli co-authors include graduate student Swati Ravi and research scientists Herman Marshall and David Huenemoerder, along with Dustin Swarm of the University of Iowa, Richard Ignace of East Tennessee State University, Yael Nazé of the University of Liège, and Pragati Pradhan of Embry Riddle Aeronautical University.
A high-energy fountain
All forms of light, including X-rays, are influenced by electric and magnetic fields. Light travels in waves that wiggle, or oscillate, at right angles to the direction in which the light is traveling. External electric and magnetic fields can pull these oscillations in random directions. But when light interacts with and bounces off a surface, it can become polarized, meaning that its vibrations tighten up in one direction. Polarized light, then, can be a way for scientists to trace the source of the light and discern some details about the source’s geometry.
The IXPE space observatory is NASA’s first mission designed to study polarized X-rays that are emitted by extreme astrophysical objects. The spacecraft, which launched in 2021, orbits the Earth and records these polarized X-rays. Since launch, it has primarily focused on supernovae, black holes, and neutron stars.
The new MIT study is the first to use IXPE to measure polarized X-rays from an intermediate polar — a smaller system than black holes and supernovae, but one that is nevertheless known to be a strong emitter of X-rays.
“We started talking about how much polarization would be useful to get an idea of what’s happening in these types of systems, which most telescopes see as just a dot in their field of view,” Marshall says.
An intermediate polar gets its name from the strength of the central white dwarf’s magnetic field. When this field is strong, the material from the companion star is directly pulled toward the white dwarf’s magnetic poles. When the field is very weak, the stellar material instead swirls around the dwarf in an accretion disk that eventually deposits matter directly onto the dwarf’s surface.
In the case of an intermediate polar, physicists predict that material should fall in a complex sort of in-between pattern, forming an accretion disk that also gets pulled toward the white dwarf’s poles. The magnetic field should lift the disk of incoming material far upward, like a high-energy fountain, before the stellar debris falls toward the white dwarf’s magnetic poles, at speeds of millions of miles per hour, in what astronomers refer to as an “accretion curtain.” Physicists suspect that this falling material should run up against previously lifted material that is still falling toward the poles, creating a sort of traffic jam of gas. This pile-up of matter forms a column of colliding gas that is tens of millions of degrees Fahrenheit and should emit high-energy X-rays.
An innermost picture
By measuring any polarized X-rays emitted by EX Hydrae, the team aimed to test the picture of intermediate polars that physicists had hypothesized. In January 2025, IXPE took a total of about 600,000 seconds, or about seven days’ worth, of X-ray measurements from the system.
“With every X-ray that comes in from the source, you can measure the polarization direction,” Marshall explains. “You collect a lot of these, and they’re all at different angles and directions which you can average to get a preferred degree and direction of the polarization.”
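In practice, combining many individual position-angle measurements into one preferred degree and direction is typically done with Stokes-style sums, where each photon contributes cos 2θ and sin 2θ terms. The sketch below is a generic textbook illustration of that bookkeeping with made-up photons, not IXPE's actual event pipeline, which also folds in instrument modulation factors and weighting.

```python
# Generic illustration of averaging many polarization position angles into a
# net polarization degree and direction via Stokes-style sums.
import numpy as np

def combine_polarization(angles_rad):
    """Combine per-photon position angles into (degree, angle)."""
    I = angles_rad.size
    Q = np.cos(2 * angles_rad).sum()   # 2*theta: polarization repeats every 180 degrees
    U = np.sin(2 * angles_rad).sum()
    degree = np.hypot(Q, U) / I        # e.g. 0.08 for an 8 percent polarization degree
    angle = 0.5 * np.arctan2(U, Q)     # preferred direction, in radians
    return degree, angle

# Toy example: 8 percent of photons share a 45-degree position angle, the rest
# are random, giving a net degree near 0.08 at roughly 45 degrees.
rng = np.random.default_rng(0)
aligned = np.full(8_000, np.radians(45.0))
background = rng.uniform(0.0, np.pi, 92_000)
degree, angle = combine_polarization(np.concatenate([aligned, background]))
print(f"degree = {degree:.3f}, angle = {np.degrees(angle):.1f} degrees")
```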
Their measurements revealed a polarization degree of 8 percent, much higher than some theoretical models had predicted. From there, the researchers were able to confirm that the X-rays were indeed coming from the system’s column, and that this column is about 2,000 miles high.
“If you were able to stand somewhat close to the white dwarf’s pole, you would see a column of gas stretching 2,000 miles into the sky, and then fanning outward,” Gunderson says.
The team also measured the direction of EX Hydrae’s X-ray polarization, which they determined to be perpendicular to the white dwarf’s column of incoming gas. This was a sign that the X-rays emitted by the column were then bouncing off the white dwarf’s surface before traveling into space, and eventually into IXPE’s telescopes.
“The thing that’s helpful about X-ray polarization is that it’s giving you a picture of the innermost, most energetic portion of this entire system,” Ravi says. “When we look through other telescopes, we don’t see any of this detail.”
The team plans to apply X-ray polarization to study other accreting white dwarf systems, which could help scientists get a grasp on much larger cosmic phenomena.
“There comes a point where so much material is falling onto the white dwarf from a companion star that the white dwarf can’t hold it anymore, the whole thing collapses and produces a type of supernova that’s observable throughout the universe, which can be used to figure out the size of the universe,” Marshall offers. “So understanding these white dwarf systems helps scientists understand the sources of those supernovae, and tells you about the ecology of the galaxy.”
This research was supported, in part, by NASA.
The cost of thinking
Large language models (LLMs) like ChatGPT can write an essay or plan a menu almost instantly. But until recently, it was also easy to stump them. The models, which rely on language patterns to respond to users’ queries, often failed at math problems and were not good at complex reasoning. Suddenly, however, they’ve gotten a lot better at these things.
A new generation of LLMs known as reasoning models is being trained to solve complex problems. Like humans, they need some time to think through problems like these — and remarkably, scientists at MIT’s McGovern Institute for Brain Research have found that the kinds of problems that require the most processing from reasoning models are the very same problems that people need to take their time with. In other words, they report today in the journal PNAS, the “cost of thinking” for a reasoning model is similar to the cost of thinking for a human.
The researchers, who were led by Evelina Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute, conclude that in at least one important way, reasoning models have a human-like approach to thinking. That, they note, is not by design. “People who build these models don’t care if they do it like humans. They just want a system that will robustly perform under all sorts of conditions and produce correct responses,” Fedorenko says. “The fact that there’s some convergence is really quite striking.”
Reasoning models
Like many forms of artificial intelligence, the new reasoning models are artificial neural networks: computational tools that learn how to process information when they are given data and a problem to solve. Artificial neural networks have been very successful at many of the tasks that the brain’s own neural networks do well — and in some cases, neuroscientists have discovered that those that perform best do share certain aspects of information processing in the brain. Still, some scientists argued that artificial intelligence was not ready to take on more sophisticated aspects of human intelligence.
“Up until recently, I was among the people saying, ‘These models are really good at things like perception and language, but it’s still going to be a long ways off until we have neural network models that can do reasoning,’” Fedorenko says. “Then these large reasoning models emerged and they seem to do much better at a lot of these thinking tasks, like solving math problems and writing pieces of computer code.”
Andrea Gregor de Varda, a K. Lisa Yang ICoN Center Fellow and a postdoc in Fedorenko’s lab, explains that reasoning models work out problems step by step. “At some point, people realized that models needed to have more space to perform the actual computations that are needed to solve complex problems,” he says. “The performance started becoming way, way stronger if you let the models break down the problems into parts.”
To encourage models to work through complex problems in steps that lead to correct solutions, engineers can use reinforcement learning. During their training, the models are rewarded for correct answers and penalized for wrong ones. “The models explore the problem space themselves,” de Varda says. “The actions that lead to positive rewards are reinforced, so that they produce correct solutions more often.”
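As a rough picture of what "rewarded for correct answers and penalized for wrong ones" can look like in code, the sketch below shows a bare-bones REINFORCE-style loss over sampled solutions. It illustrates the general idea only, under the assumption of simple +1/-1 outcome rewards; production reasoning-model training uses far more elaborate algorithms and infrastructure.

```python
# Bare-bones sketch of outcome-based reinforcement: sampled solutions are
# scored +1/-1 on their final answer, and a REINFORCE-style loss upweights
# the token sequences that led to above-average reward. Real training
# pipelines are far more elaborate than this toy example.
import torch

def reinforce_loss(sample_log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """sample_log_probs: summed log-probability of each sampled solution.
    rewards: +1.0 for a correct final answer, -1.0 otherwise."""
    advantages = rewards - rewards.mean()        # mean-reward baseline
    # Minimizing this increases the probability of above-average solutions.
    return -(advantages * sample_log_probs).mean()

# Dummy numbers standing in for four sampled chains of thought.
log_probs = torch.tensor([-35.2, -41.0, -38.7, -50.3], requires_grad=True)
rewards = torch.tensor([1.0, 1.0, -1.0, -1.0])
loss = reinforce_loss(log_probs, rewards)
loss.backward()   # in real training, gradients flow into the model's weights
```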
Models trained in this way are much more likely than their predecessors to arrive at the same answers a human would when they are given a reasoning task. Their stepwise problem-solving does mean reasoning models can take a bit longer to find an answer than the LLMs that came before — but since they’re getting right answers where the previous models would have failed, their responses are worth the wait.
The models’ need to take some time to work through complex problems already hints at a parallel to human thinking: if you demand that a person solve a hard problem instantaneously, they’d probably fail, too. De Varda wanted to examine this relationship more systematically. So he gave reasoning models and human volunteers the same set of problems, and tracked not just whether they got the answers right, but also how much time or effort it took them to get there.
Time versus tokens
This meant measuring how long it took people to respond to each question, down to the millisecond. For the models, de Varda used a different metric. It didn’t make sense to measure processing time, since this is more dependent on computer hardware than the effort the model puts into solving a problem. So instead, he tracked tokens, which are part of a model’s internal chain of thought. “They produce tokens that are not meant for the user to see and work on, but just to have some track of the internal computation that they’re doing,” de Varda explains. “It’s as if they were talking to themselves.”
Both humans and reasoning models were asked to solve seven different types of problems, like numeric arithmetic and intuitive reasoning. For each problem class, they were given many problems. The harder a given problem was, the longer it took people to solve it — and the longer it took people to solve a problem, the more tokens a reasoning model generated as it came to its own solution.
Likewise, the classes of problems that humans took longest to solve were the same classes of problems that required the most tokens for the models: arithmetic problems were the least demanding, whereas a group of problems called the “ARC challenge,” where pairs of colored grids represent a transformation that must be inferred and then applied to a new object, were the most costly for both people and models.
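A minimal version of that comparison might look like the sketch below: for each problem class, pair the mean human solving time with the mean number of reasoning tokens the model produced and check how tightly the two track each other. The numbers are invented for illustration and are not the study's data or analysis code.

```python
# Illustrative sketch of the time-vs-tokens comparison: one pair of values per
# problem class (mean human solving time, mean reasoning tokens). Invented data.
import numpy as np
from scipy import stats

human_seconds = np.array([4.2, 7.9, 15.3, 31.0, 58.4, 112.7, 240.0])
model_tokens = np.array([120, 260, 540, 980, 2_100, 4_300, 9_500])

rho, p_rho = stats.spearmanr(human_seconds, model_tokens)
r, p_r = stats.pearsonr(np.log(human_seconds), np.log(model_tokens))
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3g})")
print(f"Pearson r (log-log) = {r:.2f} (p = {p_r:.3g})")
```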
De Varda and Fedorenko say the striking match in the costs of thinking demonstrates one way in which reasoning models are thinking like humans. That doesn’t mean the models are recreating human intelligence, though. The researchers still want to know whether the models use similar representations of information to the human brain, and how those representations are transformed into solutions to problems. They’re also curious whether the models will be able to handle problems that require world knowledge that is not spelled out in the texts that are used for model training.
The researchers point out that even though reasoning models generate internal monologues as they solve problems, they are not necessarily using language to think. “If you look at the output that these models produce while reasoning, it often contains errors or some nonsensical bits, even if the model ultimately arrives at a correct answer. So the actual internal computations likely take place in an abstract, non-linguistic representation space, similar to how humans don’t use language to think,” he says.
