No Face, No Case: California’s S.B. 627 Demands Cops Show Their Faces
Across the country, people are collecting and sharing footage of masked law enforcement officers from both federal and local agencies deputized to do so-called immigration enforcement: arresting civilians, in some cases violently and/or warrantlessly. That footage is part of a long tradition of recording law enforcement during their operations to ensure some level of accountability if people observe misconduct and/or unconstitutional practices. However, as essential as recording police can be in proving allegations of misconduct, the footage is rendered far less useful when officers conceal their badges and/or faces. Further, lawyers, journalists, and activists cannot then identify officers in public records requests for body-worn camera footage to view the interaction from the officers’ point of view.
In response to these growing concerns, California has introduced S.B. 627 to prohibit law enforcement from covering their faces during these kinds of public encounters. This builds on legislation (in California and some other states and municipalities) that requires police, for example, “to wear a badge, nameplate, or other device which bears clearly on its face the identification number or name of the officer.” Similarly, police reform legislation passed in 2018 requires greater transparency by opening individual personnel files of law enforcement to public scrutiny when there are use of force cases or allegations of violent misconduct.
But in the case of ICE detentions in 2025, federal and federally deputized officers are not only covering up their badges—they're covering their faces as well. This bill would offer an important tool to prevent this practice, and to ensure that civilians who record the police can actually determine the identity of the officers they’re recording, in case further investigation is warranted. The legislation explicitly includes “any officer or anyone acting on behalf of a local, state, or federal law enforcement agency.”
This is a necessary move. The right to record police, and to hold government actors accountable for their actions, requires that we know who the government actors are in the first place. The new legislation seeks to cover federal officers in addition to state and local officials, protecting Californians from otherwise unaccountable law enforcement activity.
As EFF has stood up for the right to record police, we also stand up for the right to be able to identify officers in those recordings. We have submitted a letter to the state legislature to that effect. California should pass S.B. 627, and more states should follow suit to ensure that the right to record remains intact.
A bionic knee integrated into tissue can restore natural movement
MIT researchers have developed a new bionic knee that can help people with above-the-knee amputations walk faster, climb stairs, and avoid obstacles more easily than they could with a traditional prosthesis.
Unlike prostheses in which the residual limb sits within a socket, the new system is directly integrated with the user’s muscle and bone tissue. This enables greater stability and gives the user much more control over the movement of the prosthesis.
Participants in a small clinical study also reported that the limb felt more like a part of their own body, compared to people who had more traditional above-the-knee amputations.
“A prosthesis that's tissue-integrated — anchored to the bone and directly controlled by the nervous system — is not merely a lifeless, separate device, but rather a system that is carefully integrated into human physiology, offering a greater level of prosthetic embodiment. It’s not simply a tool that the human employs, but rather an integral part of self,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.
Tony Shu PhD ’24 is the lead author of the paper, which appears today in Science.
Better control
Over the past several years, Herr’s lab has been working on new prostheses that can extract neural information from muscles left behind after an amputation and use that information to help guide a prosthetic limb.
During a traditional amputation, pairs of muscles that take turns stretching and contracting are usually severed, disrupting the normal agonist-antagonist relationship of the muscles. This disruption makes it very difficult for the nervous system to sense the position of a muscle and how fast it’s contracting.
Using the new surgical approach developed by Herr and his colleagues, known as agonist-antagonist myoneuronal interface (AMI), muscle pairs are reconnected during surgery so that they still dynamically communicate with each other within the residual limb. This sensory feedback helps the wearer of the prosthesis to decide how to move the limb, and also generates electrical signals that can be used to control the prosthetic limb.
In a 2024 study, the researchers showed that people with amputations below the knee who received the AMI surgery were able to walk faster and navigate around obstacles much more naturally than people with traditional below-the-knee amputations.
In the new study, the researchers extended the approach to better serve people with amputations above the knee. They wanted to create a system that could not only read out signals from the muscles using AMI but also be integrated into the bone, offering more stability and better sensory feedback.
To achieve that, the researchers developed a procedure to insert a titanium rod into the residual femur bone at the amputation site. This implant allows for better mechanical control and load bearing than a traditional prosthesis. Additionally, the implant contains 16 wires that collect information from electrodes located on the AMI muscles inside the body, which enables more accurate transduction of the signals coming from the muscles.
This bone-integrated system, known as e-OPRA, transmits AMI signals to a new robotic controller developed specifically for this study. The controller uses this information to calculate the torque necessary to move the prosthesis the way that the user wants it to move.
“All parts work together to better get information into and out of the body and better interface mechanically with the device,” Shu says. “We’re directly loading the skeleton, which is the part of the body that’s supposed to be loaded, as opposed to using sockets, which is uncomfortable and can lead to frequent skin infections.”
In this study, two subjects received the combined AMI and e-OPRA system, known as an osseointegrated mechanoneural prosthesis (OMP). These users were compared with eight who had the AMI surgery but not the e-OPRA implant, and seven users who had neither AMI nor e-OPRA. All subjects took a turn at using an experimental powered knee prosthesis developed by the lab.
The researchers measured the participants’ ability to perform several types of tasks, including bending the knee to a specified angle, climbing stairs, and stepping over obstacles. In most of these tasks, users with the OMP system performed better than the subjects who had the AMI surgery but not the e-OPRA implant, and much better than users of traditional prostheses.
“This paper represents the fulfillment of a vision that the scientific community has had for a long time — the implementation and demonstration of a fully physiologically integrated, volitionally controlled robotic leg,” says Michael Goldfarb, a professor of mechanical engineering and director of the Center for Intelligent Mechatronics at Vanderbilt University, who was not involved in the research. “This is really difficult work, and the authors deserve tremendous credit for their efforts in realizing such a challenging goal.”
A sense of embodiment
In addition to testing gait and other movements, the researchers also asked questions designed to evaluate participants’ sense of embodiment — that is, to what extent their prosthetic limb felt like a part of their own body.
Questions included whether the patients felt as if they had two legs, if they felt as if the prosthesis was part of their body, and if they felt in control of the prosthesis. Each question was designed to evaluate the participants’ feelings of agency, ownership of the device, and body representation.
The researchers found that as the study went on, the two participants with the OMP showed much greater increases in their feelings of agency and ownership than the other subjects.
“Another reason this paper is significant is that it looks into these embodiment questions and it shows large improvements in that sensation of embodiment,” Herr says. “No matter how sophisticated you make the AI systems of a robotic prosthesis, it’s still going to feel like a tool to the user, like an external device. But with this tissue-integrated approach, when you ask the human user what is their body, the more it’s integrated, the more they’re going to say the prosthesis is actually part of self.”
The AMI procedure is now done routinely on patients with below-the-knee amputations at Brigham and Women’s Hospital, and Herr expects it will soon become the standard for above-the-knee amputations as well. The combined OMP system will need larger clinical trials to receive FDA approval for commercial use, which Herr expects may take about five years.
The research was funded by the Yang Tan Collective and DARPA.
Axon’s Draft One is Designed to Defy Transparency
Axon Enterprise’s Draft One — a generative artificial intelligence product that writes police reports based on audio from officers’ body-worn cameras — seems deliberately designed to avoid audits that could provide any accountability to the public, an EFF investigation has found.
Our review of public records from police agencies already using the technology — including police reports, emails, procurement documents, department policies, software settings, and more — as well as Axon’s own user manuals and marketing materials revealed that it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.
You can read our full report, which details what we found in those documents, how we filed those public records requests, and how you can file your own, here.
Everyone should have access to answers, evidence, and data regarding the effectiveness and dangers of this technology. Axon and its customers claim this technology will revolutionize policing, but it remains to be seen how it will change the criminal justice system, and who this technology benefits most.
For months, EFF and other organizations have warned about the threats this technology poses to accountability and transparency in an already flawed criminal justice system. Now we've concluded the situation is even worse than we thought: There is no meaningful way to audit Draft One usage, whether you're a police chief or an independent researcher, because Axon designed it that way.
Draft One uses a ChatGPT variant to process body-worn camera audio of public encounters and create police reports based only on the captured verbal dialogue; it does not process the video. The Draft One-generated text is sprinkled with bracketed placeholders where officers are encouraged to add additional observations or information—or which can be quickly deleted. Officers are supposed to edit Draft One's report and correct anything the Gen AI misunderstood due to a lack of context, troubled translations, or just plain-old mistakes. When they're done, the officer is prompted to sign an acknowledgement that the report was generated using Draft One and that they have reviewed it and made the edits necessary to ensure it is consistent with their recollection. Then they can copy and paste the text into their report. When they close the window, the draft disappears.
Any new, untested, and problematic technology needs a robust process to evaluate its use by officers. In this case, one would expect police agencies to retain data that ensures officers are actually editing the AI-generated reports as required, or that officers can accurately answer if a judge demands to know whether, or which part of, reports used by the prosecution were written by AI.
One would expect audit systems to be readily available to police supervisors, researchers, and the public, so that anyone can make their own independent conclusions. And one would expect that Draft One would make it easy to discern its AI product from human product – after all, even your basic, free word processing software can track changes and save a document history.
But Draft One defies all these expectations, offering meager oversight features that deliberately conceal how it is used.
So when a police report includes biased language, inaccuracies, misinterpretations, or even outright lies, the record won't indicate whether the officer or the AI is to blame. That makes it extremely difficult, if not impossible, to assess how the system affects justice outcomes, because there is little non-anecdotal data from which to determine whether the technology is junk.
The disregard for transparency is perhaps best encapsulated by a short email that an administrator in the Frederick Police Department in Colorado, one of Axon's first Draft One customers, sent to a company representative after receiving a public records request related to AI-generated reports.
"We love having new toys until the public gets wind of them," the administrator wrote.
No Record of Who Wrote What
The first question anyone should have about a police report written using Draft One is which parts were written by AI and which were added by the officer. Once you know this, you can start to answer more questions, like:
- Are officers meaningfully editing and adding to the AI draft? Or are they reflexively rubber-stamping the drafts to move on as quickly as possible?
- How often are officers finding and correcting errors made by the AI, and are there patterns to these errors?
- If there is inappropriate language or a fabrication in the final report, was it introduced by the AI or the officer?
- Is the AI overstepping in its interpretation of the audio? If a report says, "the subject made a threatening gesture," was that added by the officer, or did the AI make a factual assumption based on the audio? If a suspect uses metaphorical slang, does the AI document it literally? If a subject says "yeah" throughout a conversation as a verbal acknowledgement that they're listening to the officer, is that interpreted as an agreement or a confession?
Ironically, Draft One does not save the first draft it generates. Nor does the system store any subsequent versions. Instead, the officer copies and pastes the text into the police report, and the draft originally created by Draft One disappears as soon as the window closes. There is no log or record indicating which portions of a report were written by the computer and which portions were written by the officer, except for the officer's own recollection. If an officer generates a Draft One report multiple times, there's no way to tell whether the AI interpreted the audio differently each time.
Axon is open about not maintaining these records, at least when it markets directly to law enforcement.
In this video of a roundtable discussion about the Draft One product, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added):
“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.”
To reiterate: Axon deliberately does not store the original draft written by the Gen AI, because "the last thing" they want is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit).
Following up on the same question, Axon's Director of Strategic Relationships at Axon Justice suggests this is fine, since a police officer using a word processor wouldn't be required to save every draft of a police report as they're re-writing it. This is, of course, misdirection, and not remotely comparable. An officer with a word processor is one thought process and a record created by one party; Draft One involves two processes from two parties–Axon and the officer. Ultimately, it could and should be considered two records: the version sent to the officer by Axon and the version edited by the officer.
Word processors stopped producing unexpected consequences in police report-writing long ago, but Draft One remains unproven. After all, every AI evangelist, including Axon, claims this technology is a game-changer. So why wouldn't an agency want to maintain a record that can establish the technology’s accuracy?
It also appears that Draft One isn't simply hewing to long-established norms of police report-writing; it may fundamentally change them. In one email, the Campbell Police Department's Police Records Supervisor tells staff, “You may notice a significant difference with the narrative format…if the DA’s office has comments regarding our report narratives, please let me know.” It's more than a little shocking that a police department would implement such a change without fully soliciting and addressing the input of prosecutors. In this case, the Santa Clara County District Attorney had already suggested police include a disclosure when Axon Draft One is used in each report, but Axon's engineers had yet to finalize the feature at the time it was rolled out.
One of the main concerns, of course, is that this system effectively creates a smokescreen over truth-telling in police reports. If a police report contains a lie or inappropriate language, who is to say whether the officer wrote it or the AI did? An officer can be punished severely for official dishonesty, but the consequences may be more lenient for a cop who blames it on the AI. Axon engineers have already discovered a bug that, on at least three occasions, allowed officers to circumvent the "guardrails" that supposedly deter officers from submitting AI-generated reports without reading them first, as Axon disclosed to the Frederick Police Department.
To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it's used. But Axon has intentionally made this difficult.
What the Audit Trail Actually Looks Like
You may have seen news stories or other public statements asserting that Draft One does, indeed, have auditing features. So, we dug through the user manuals to figure out what exactly that means.
The first thing to note is that, based on our review of the documentation, there appears to be no feature in Axon software that allows departments to export a list of all police officers who have used Draft One. Nor is it possible to export a list of all reports created by Draft One, unless the department has customized its process (we'll get to that in a minute).
This is disappointing because, without this information, it's nearly impossible to do even the most basic statistical analysis: determining how many officers are using the technology and how often.
Based on the documentation, you can only export two types of very basic logs, with the process differing depending on whether an agency uses Evidence or Records/Standards products. These are:
- A log of basic actions taken on a particular report. If the officer requested a Draft One report or signed the Draft One liability disclosure related to the police report, it will show here. But nothing more than that.
- A log of an individual officer/user's basic activity in the Axon Evidence/Records system. This audit log shows things such as when an officer logs into the system, uploads videos, or accesses a piece of evidence. The only Draft One-related activities this tracks are whether the officer ran a Draft One request, signed the Draft One liability disclosure, or changed the Draft One settings.
This means that, to do a comprehensive review, an evaluator may need to go through the records management system and look up each officer individually to identify whether and when that officer used Draft One. That could mean combing through dozens, hundreds, or in some cases, thousands of individual user logs.
An example of Draft One usage in an audit log.
An auditor could also go report-by-report to see which ones involved Draft One, but the sheer number of reports generated by an agency means this method would require a massive amount of time.
But can agencies even create a list of police reports that were co-written with AI? It depends on whether the agency has included a disclosure in the body of the text, such as "I acknowledge this report was generated from a digital recording using Draft One by Axon." If so, then an administrator can use "Draft One" as a keyword search to find relevant reports.
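Where an agency does require that disclosure, the keyword search an administrator runs inside Axon's system could also be reproduced against any bulk export of report narratives. As a purely illustrative sketch — the CSV layout and column names here are assumptions, not Axon's actual export schema — it might look like this:

```python
import csv

# Keyword that agencies requiring a disclosure embed in each AI-assisted report.
DISCLOSURE = "Draft One"

def find_draft_one_reports(export_path):
    """Return IDs of reports whose narrative contains the disclosure keyword.

    Assumes a CSV export with 'report_id' and 'narrative' columns --
    a hypothetical layout for illustration, not Axon's real schema.
    """
    matches = []
    with open(export_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Case-insensitive match, since disclosure wording may vary.
            if DISCLOSURE.lower() in row["narrative"].lower():
                matches.append(row["report_id"])
    return matches
```

The point of the sketch is how fragile this is: the search only works if every officer's disclosure text survived editing, which is exactly the kind of guarantee Draft One does not provide.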
Agencies that do not require that language told us they could not identify which reports were written with Draft One. For example, one of those agencies and one of Axon's most promoted clients, the Lafayette Police Department in Indiana, told us:
"Regarding the attached request, we do not have the ability to create a list of reports created through Draft One. They are not searchable. This request is now closed."
Meanwhile, in response to a similar public records request, the Palm Beach County Sheriff's Office, which does require a disclosure at the bottom of each report that it had been written by AI, was able to isolate more than 3,000 Draft One reports generated between December 2024 and March 2025.
They told us: "We are able to do a keyword and a timeframe search. I used the words draft one and the system generated all the draft one reports for that timeframe."
We have requested further clarification from Axon, but they have yet to respond.
However, as we learned from email exchanges between the Frederick Police Department in Colorado and Axon, Axon is tracking police use of the technology at a level that isn't available to the police department itself.
In response to a request from Politico's Alfred Ng in August 2024 for Draft One-generated police reports, the police department was struggling to isolate those reports.
An Axon representative responded: "Unfortunately, there’s no filter for DraftOne reports so you’d have to pull a User’s audit trail and look for Draft One entries. To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy."
But then, Axon followed up: "We track which reports use Draft One internally so I exported the data." Then, a few days later, Axon provided Frederick with some custom JSON code to extract the data in the future.
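Axon has not published that export format, so we can only illustrate the kind of analysis such internal data would enable. The sketch below assumes a hypothetical JSON export — a list of objects with `officer` and `used_draft_one` fields, names we invented for illustration — and tallies Draft One usage per officer, the basic statistic agencies currently cannot produce on their own:

```python
import json
from collections import Counter

def draft_one_usage_by_officer(json_path):
    """Tally Draft One report counts per officer from a usage export.

    The structure assumed here -- a JSON list of objects with
    'officer' and 'used_draft_one' keys -- is a hypothetical
    stand-in; Axon's actual internal export format is not public.
    """
    with open(json_path, encoding="utf-8") as f:
        records = json.load(f)
    # Count only records flagged as Draft One-generated.
    return Counter(r["officer"] for r in records if r.get("used_draft_one"))
```

Even a tally this simple would let a supervisor or researcher spot officers who rely on the AI for every report — which is why its absence from the product's own audit tools matters.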
What is Being Done About Draft One
The California Assembly is currently considering SB 524, a bill that addresses transparency measures for AI-written police reports. The legislation would require disclosure whenever police use artificial intelligence to partially or fully write official reports, as well as “require the first draft created to be retained for as long as the final report is retained.” Because Draft One is designed not to retain the first or any previous drafts of a report, it cannot comply with this common-sense and first-step bill, and any law enforcement usage would be unlawful.
Axon markets Draft One as a solution to a problem police have been complaining about for at least a century: that they do too much paperwork. Or, at least, that they spend too much time doing paperwork. The current research on whether Draft One remedies this issue is mixed, with some agencies reporting no real time savings and other agencies extolling its virtues (although their data also shows that results vary even within a department).
In the justice system, police must prioritize accuracy over speed. Public safety and a trustworthy legal system demand quality over corner-cutting. Time saved should not be the only metric, or even the most important one. It's like evaluating a drive-through restaurant based only on how fast the food comes out, while deliberately concealing the ingredients and nutritional information and failing to inspect whether the kitchen is up to health and safety standards.
Given how untested this technology is and how much the company is in a hurry to sell Draft One, many local lawmakers and prosecutors have taken it upon themselves to try to regulate the product’s use. Utah is currently considering a bill that would mandate disclosure for any police reports generated by AI, thus sidestepping one of the current major transparency issues: it’s nearly impossible to tell which finished reports started as an AI draft.
In King County, Washington, which includes Seattle, the prosecuting attorney’s office has been clear in its instructions: police should not use AI to write police reports. Its memo says:
We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.
We urge other prosecutors to follow suit and demand that police in their jurisdiction not unleash this new, unaccountable, and intentionally opaque AI product.
Conclusion
Police should not be using AI to write police reports. There are simply too many unanswered questions about how AI would translate the audio of these situations and whether police will actually edit those drafts, while there is no way for the public to reliably discern what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might compound and exacerbate existing problems, or create new ones, in an already unfair and opaque criminal justice system.
EFF will continue to research and advocate against the use of this technology, but for now the lesson is clear: Anyone with control or influence over police departments, be they lawmakers or people in the criminal justice system, has a duty to be informed about the potential harms and challenges posed by AI-written police reports.
EFF's Guide to Getting Records About Axon's Draft One AI-Generated Police Reports
The moment Axon Enterprise announced a new product, Draft One, that would allow law enforcement officers to use artificial intelligence to automatically generate incident report narratives based on body-worn camera audio, everyone in the police accountability community immediately started asking the same questions.
What do AI-generated police reports look like? What kind of paper trail does this system leave? How do we get a hold of documentation using public records laws?
Unfortunately, obtaining these records isn't easy. In many cases, it's straight-up impossible.
Read our full report on how Axon's Draft One defies transparency expectations by design here.
In some jurisdictions, the documents are walled off behind government-created barriers. For example, California fully exempts police narrative reports from public disclosure, while other states charge fees to access individual reports that become astronomical if you want to analyze the output in bulk. Then there are technical barriers: Axon's product itself does not allow agencies to isolate reports that contain an AI-generated narrative, although an agency can voluntarily institute measures to make them searchable by a keyword.
This spring, EFF tested out different public records request templates and sent them to dozens of law enforcement agencies we believed were using Draft One.
We asked each agency for the Draft One-generated police reports themselves, knowing that in most cases this would be a long shot. We also dug into Axon's user manuals to figure out what kind of logs are generated and how to carefully phrase our public records request to get them. We asked for the current system settings for Draft One, since there are a lot of levers police administrators can pull that drastically change how and when officers can use the software. We also requested the standard records that we usually ask for when researching new technologies: procurement documents, agreements, training manuals, policies, and emails with vendors.
Like all mass public records campaigns, the results were… mixed. Some agencies were refreshingly open with their records. Others assessed records fees well outside the usual range for a non-profit organization.
What we learned about the process is worth sharing. Axon has thousands of clients nationwide that use its Tasers, body-worn cameras and bundles of surveillance equipment, and the company is using those existing relationships to heavily promote Draft One. We expect many more cities to deploy the technology over the next few years. Watchdogging police use of AI will require a nationwide effort by journalists, advocacy organizations and community volunteers.
Below we’re sharing some sample language you can use in your own public records requests about Draft One — but be warned. It’s likely that the more you include, the longer it might take and the higher the fees will get. The template language and our suggestions for filing public records requests are not legal advice. If you have specific questions about a public records request you filed, consult a lawyer.
1. Police Reports
Language to try in your public records request:
- All police report narratives, supplemental report narratives, warrant affidavits, statements, and other narratives generated using Axon Draft One to document law enforcement-related incidents for the period between [DATE IN THE LAST FEW WEEKS] and the date this request is received. If your agency requires a Draft One disclosure in the text of the report, you can use "Draft One" as a keyword search term.
Or
- The [NUMBER] most recent police report narratives that were generated using Axon Draft One between [DATE IN THE LAST FEW WEEKS] and the date this request is received.
If you are curious about a particular officer's Draft One usage, you can also ask for their reports specifically. However, it may be helpful to obtain their usage log first (see section 2).
- All police report narratives, supplemental report narratives, warrant affidavits, statements, and other narratives generated by [OFFICER NAME] using Axon Draft One to document law enforcement-related incidents for the period between [DATE IN THE LAST FEW WEEKS] and the date this request is received.
We suggest using weeks, not months, because the sheer number of reports can get costly very quickly.
As an add-on to Axon's evidence and records management platforms, Draft One uses ChatGPT to convert audio taken from Axon body-worn cameras into the so-called first draft of the narrative portion of a police report.
When Politico surveyed seven agencies in September 2024, reporter Alfred Ng found that police administrators did not have the technical ability to identify which reports contained AI-generated language. As Ng reported, “There is no way for us to search for these on our end,” a Lafayette, Indiana police captain said. Six months later, EFF received the same no-can-do response from the Lafayette Police Department.
Although Lafayette Police could not create a list on their own, it turns out that Axon's engineers can generate these lists for police if asked. When the Frederick Police Department in Colorado received a similar request from Ng, the agency contacted Axon for help. The company does internally track reports written with Draft One: it provided a spreadsheet of Draft One reports (.csv) and even gave Frederick Police computer code to allow the agency to create similar lists in the future. Axon told the agency it would look at making this a built-in feature, but that appears not to have happened yet.
But we also struck gold with two agencies: the Palm Beach County Sheriff's Office (PBCSO) in Florida and the Lake Havasu City Police Department in Arizona. In both cases, the agencies require officers to include a disclosure that they used Draft One at the end of the police narrative. Here's a slide from the Palm Beach County Sheriff's Draft One training:
And here's the boilerplate disclosure:
I acknowledge this report was generated from a digital recording using Draft One by Axon. I further acknowledge that I have reviewed the report, made any necessary edits, and believe it to be an accurate representation of my recollection of the reported events. I am willing to testify to the accuracy of this report.
As small a gesture as it may seem, that disclosure makes all the difference when it comes to responding to a public records request. Lafayette Police could not isolate the reports because its policy does not require the disclosure. A Frederick Police Department sergeant noted in an email to Axon that they could isolate reports when the auto-disclosure was turned on, but not after they decided to turn it off. This year, Utah legislators introduced a bill to require this kind of disclosure on AI-generated reports.
As the PBCSO records manager told us: "We are able to do a keyword and a timeframe search. I used the words ‘Draft One’ and the system generated all the Draft One reports for that timeframe." In fact, in Palm Beach County and Lake Havasu, records administrators dug up huge numbers of records. But, once we saw the estimated price tag, we ultimately narrowed our request to just 10 reports.
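The same keyword-and-timeframe idea works locally once you receive a batch of reports. Here is a minimal sketch in Python; the folder layout, the `.txt` extension, and the exact disclosure phrase are all assumptions about your own files, not anything Axon provides:

```python
import tempfile
from pathlib import Path

def reports_mentioning(folder, keyword="Draft One"):
    """Return names of .txt files in `folder` that contain `keyword`."""
    return sorted(
        p.name
        for p in Path(folder).glob("*.txt")
        if keyword.lower() in p.read_text(errors="ignore").lower()
    )

# Demo with two throwaway files standing in for received reports:
with tempfile.TemporaryDirectory() as d:
    Path(d, "report_001.txt").write_text(
        "I acknowledge this report was generated using Draft One by Axon.")
    Path(d, "report_002.txt").write_text("Standard narrative, written by hand.")
    print(reports_mentioning(d))  # -> ['report_001.txt']
```

A date filter could be layered on the same way, if the reports carry a parseable date in their text or filenames.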
Here is an example of a report from PBCSO, which only allows Draft One to be used in incidents that don't involve a criminal charge. As a result, many of the reports were related to mental health or domestic dispute responses.
A machine readable text version of this report is available here. Full version here.
And here is an example from the Lake Havasu City Police Department, whose clerk was kind enough to provide us with a diverse sample of requests.
A machine readable text version of this report is available here. Full version here.
EFF redacted some of these records to protect the identity of members of the public who were captured on body-worn cameras. Black-bar redactions were made by the agencies, while bars with X's were made by us. You can view all the examples we received below:
- 10 Axon Draft One-assisted reports from the Palm Beach County Sheriff's Office
- 10 Axon Draft One-assisted reports from the Lake Havasu Police Department
We also received police reports (perhaps unintentionally) from two other agencies that were contained as email attachments in response to another part of our request (see section 7).
2. Audit Logs
Language to try in your public records request:
Note: You can save time by determining in advance whether the agency uses Axon Evidence or Axon Records and Standards, then choose the applicable option below. If you don't know, you can always request both.
Audit logs from Axon Evidence
- Audit logs for the period December 1, 2024 through the date this request is received, for the 10 most recently active users.
According to Axon's online user manual, through Axon Evidence agencies are able to view audit logs of individual officers to ascertain whether they have requested the use of Draft One, signed a Draft One liability disclosure or changed Draft One settings (https://my.axon.com/s/article/View-the-audit-trail-in-Axon-Evidence-Draft-One?language=en_US). In order to obtain these audit logs, you may follow the instructions on this Axon page: https://my.axon.com/s/article/Viewing-a-user-audit-trail?language=en_US.
In order to produce a list of the 10 most recently active users, you may click the arrow next to "Last Active" and then select the 10 most recent. The [...] menu item allows you to export the audit log. We would prefer these audits as .csv files if possible.
Alternatively, if you know the names of specific officers, you can name them rather than selecting the most recent.
Or
Audit logs from Axon Records and Axon Standards
- According to Axon's online user manual, through Axon Records and Standards, agencies are able to view audit logs of individual officers to ascertain whether they have requested a Draft One draft or signed a Draft One liability disclosure. https://my.axon.com/s/article/View-the-audit-log-in-Axon-Records-and-Standards-Draft-One?language=en_US
To obtain these logs using the Axon Records Audit Tool, follow these instructions: https://my.axon.com/s/article/Audit-Log-Tool-Axon-Records?language=en_US
a. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "M" into the audit tool. If no user comes up with M, please try "Mi."
b. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "J" into the audit tool. If no user comes up with J, please try "Jo."
c. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "S" into the audit tool. If no user comes up with S, please try "Sa."
You could also tell the agency you are only interested in Draft One related items, which may save the agency time in reviewing and redacting the documents.
Generally, many of the basic actions a police officer takes using Axon technology — whether it's signing in, changing a password, accessing evidence or uploading BWC footage — are logged in the system.
This also includes some actions when an officer uses Draft One. However, the system logs only three types of Draft One activities: requesting that Draft One generate a report, signing a Draft One liability disclosure, or changing Draft One's settings. These logs are one of the only ways to identify which reports were written with AI and how widely the technology is used.
Unfortunately, Axon appears to have designed its system so that administrators cannot create a list of all Draft One activities taken by the entire police force. Instead, all they can do is view an individual officer's audit log to see when they used Draft One or look at the log for a particular piece of evidence to see if Draft One was used. These can be exported as a spreadsheet or a PDF. (When the Frederick Police Department asked Axon how to create a list of Draft One reports, the Axon rep told them that feature wasn't available and they would have to follow the above method. "To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy," Axon wrote in August 2024, then suggested it might come up with a long-term solution. We emailed Axon back in March to see if this was still the case, but they did not provide a response.)
Here's an excerpt from a PDF version from the Bishop Police Department in California:
Here are some additional audit log examples:
- Campbell Police Department, California (XLSX)
- Lafayette Police Department, Indiana (XLSX)
- Bishop Police Department, California (PDF)
- Pasco Police Department, Washington (CSV)
If you know the name of an individual officer, you can try to request their audit logs to see if they used Draft One. Since we didn't have a particular officer in mind, we had to get creative.
An agency may manage their documents with one of a few different Axon offerings: Axon Evidence, Axon Records, or Axon Standards. The process for requesting records is slightly different depending on which one is used. We dug through the user manuals and came up with a few ways to export a random(ish) example. We also linked the manuals and gave clear instructions for the records officers.
With Axon Evidence, an administrator can simply sort the system to show the 10 most recent users and then export their usage logs. With Axon Records/Standards, the administrator has to start typing a name, and the search auto-populates with suggestions. So we asked agencies to export the audit logs for the first users who came up when they typed the letters M, J, and S into the search (since those letters are common at the beginning of names).
Unfortunately, this method is a little bit of a gamble. Many officers still aren't using Draft One, so you may end up with hundreds of pages of logs that don't mention Draft One at all (as was the case with the records we received from Monroe County, NY).
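If you do receive a CSV export with hundreds of non-Draft One entries, isolating the relevant rows is a few lines of scripting. A sketch, assuming hypothetical column headers ("User", "Action", "Date"); real exports will differ, so match the names to your file:

```python
import csv
import io

def draft_one_rows(csv_text):
    """Return audit-log rows whose logged action mentions Draft One."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if "draft one" in row.get("Action", "").lower()]

# In-memory stand-in for an exported audit log. Per Axon's manual, only
# three Draft One activities are logged: requesting a draft, signing the
# liability disclosure, and changing settings.
sample = """User,Action,Date
jsmith,Signed in,2025-01-02
jsmith,Requested Draft One narrative,2025-01-02
jsmith,Uploaded BWC footage,2025-01-03
jsmith,Signed Draft One liability disclosure,2025-01-03
"""

for row in draft_one_rows(sample):
    print(row["Date"], row["Action"])
```

The same filter works on a file opened with `open(path, newline="")` in place of the in-memory sample.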
3. Settings
Language to try in your public records request:
- A copy of all settings and configurations made by this agency in its use of the Axon Draft One platform, including all opt-in features that the department has elected to use and the incident types for which the software can be used. A screen capture of these settings will suffice.
We knew the Draft One system offers department managers the option to customize how it can be used, including the categories of crime for which reports can be generated and whether or not there is a disclaimer automatically added to the bottom of the report disclosing the use of AI in its generation. So we asked for a copy of these settings and configurations. In some cases, agencies claimed this was exempted from their public records laws, while other agencies did provide the information. Here is an example from the Campbell Police Department in California:
(It's worth noting that while Campbell does require each police report to contain a disclosure that Draft One was used, the California Public Records Act exempts police reports from being released.)
Examples of settings:
- Bishop Police Department, California
- Campbell Police Department, California
- Pasco Police Department, Washington
4. Contracts
Language to try in your public records request:
- All contracts, memorandums of understanding, and any other written agreements between this agency and Axon related to the use of Draft One, Narrative Assistant, or any other AI-assisted report generation tool provided by Axon. Responsive records include all associated amendments, exhibits, and supplemental and supporting documentation, as well as all relevant terms of use, licensing agreements, and any other guiding materials. If access to Draft One or similar tools is being provided via an existing contract or through an informal agreement, please provide the relevant contract or the relevant communication or agreement that facilitated the access. This includes all agreements, both formal and informal, including all trial access, even if that access does not or did not involve financial obligations.
It can be helpful to know how much Draft One costs, how many user licenses the agency paid for, and what the terms of the agreement are. That information is often contained in records related to the contracting process. Agencies will often provide these records with minimal pushback or redactions. Many of these records may already be online, so a requester can save time and effort by looking around first. These are often found in city council agenda packets. Also, law enforcement agencies often will bump these requests to the city or county clerk instead.
Here's an excerpt from the Monroe County Sheriff's Office in New York:
These kinds of procurement records describe the nature and cost of the relationship between the police department and the company. They can be very helpful for understanding how much a continuing service subscription will cost and what else was bundled in as part of the purchase. Draft One, so far, is often accessed as an additional feature along with other Axon products.
We received too many documents to list them all, but here is a representative example of some of the other documents you might receive, courtesy of the Dacono Police Department in Colorado.
5. Training, Manuals and Policies
Language to try in your public records request:
All training materials relevant to Draft One or Axon Narrative Assistant generated by this agency, including but not limited to:
- All training material provided by Axon to this agency regarding its use of Draft One;
- All internal training materials regarding the use of Draft One;
- All user manuals, other guidance materials, help documents, or related materials;
- Guides, safety tests, and other supplementary material that mention Draft One provided by Axon between January 1, 2024 and the date this request is received;
- Any and all policies and general orders related to the use of Draft One, the Narrative Assistant, or any other AI-assisted report generation offerings provided by Axon (An example of one such policy can be found here: https://cdn.muckrock.com/foia_files/2024/11/26/608_Computer_Software_and_Transcription-Assisted_Report_Generation.pdf).
In addition to seeing when Draft One was used and how it was acquired, it can be helpful to know what rules officers must follow, what directions they're given for using it, and what features are available to users. That's where manuals, policies and training materials come in handy.
User manuals are typically going to come from Axon itself. In general, if you can get your hands on one, this will help you to better understand the mechanisms of the system, and it will help you align the way you craft your request with the way the system actually works. Luckily, Axon has published many of the materials online and we've already obtained the user manual from multiple agencies. However, Axon does update the manual from time to time, so it can be helpful to know which version the agency is working from.
Here's one from December 2024:
Policies are internal police department guidance for using Draft One. Not all agencies have developed a policy, but the ones they do have may reveal useful information, such as other records you might be able to request. Here are some examples:
- Palm Beach County Sheriff's Office General Order 563 - Axon Draft One
- Colorado Springs Police Department General Order 1904 - Use of Specialized Axon System
- Lake Havasu Police Department Policy 342 - Report Preparation
- Campbell Police Department Policy 344 - Report Preparation
- Lafayette Police Department Policy 608 - Computer Software and Transcription-Assisted Report Generation
Training and user manuals also might reveal crucial information about how the technology is used. In some cases these documents are provided by Axon to the customer. These records may illuminate the specific direction that departments are emphasizing about using the product.
Here are a few examples of training presentations:
- Colorado Springs Police Department 2025-Q1-Draft-One-Training
- Palm Beach County Sheriff's Office - Axon Draft One Training Material
- Pasco Police Department - Axon Draft One Presentation
6. Evaluations and Pilot Programs
Language to try in your public records request:
- All final reports, evaluations, or other documentation concluding or summarizing a trial, evaluation period, or pilot project
Many departments are getting access to Draft One as part of a trial or pilot program. The outcome of those experiments with the product can be eye-opening or eyebrow-raising. There might also be additional data or a formal report that reviews what the department was hoping to get from the experience, how they structured any evaluation of its time-saving value for the department, and other details about how officers did or did not use Draft One.
Here are some examples we received:
- The Effect of Artificial Intelligence has on Time Spent Writing Reports: An analysis of data from the Lake Havasu City Police Department
- Colorado Springs Police Department: Spreadsheets measuring amount of time officers spent writing reports versus using Draft One (zip)
7. Communications
Language to try in your public records request:
• All communications sent or received by any representative of this agency with individuals representing Axon referencing any of the following terms, including emails and attachments:
- Draft One
- Narrative Assistant
- AI-generated report
• All communications sent to or received by any representative of this agency with each of the following email addresses, including attachments:
- [INSERT EMAIL ADDRESSES]
Note: We are not including the specific email addresses here that we used, since they are subject to change when employees are hired, promoted, or find new gigs. However, you can find the emails we used in our requests on MuckRock.
The communications we wanted were primarily the emails between Axon and the law enforcement agency. As you can imagine, these emails could reveal the back-and-forth between the company and its potential customers, and these conversations could include the marketing pitch made to the department, the questions and problems police may have had with it, and more.
In some cases, these emails reveal cozy relationships between salespeople and law enforcement officials. Take, for example, this email exchange between the Dickinson Police Department and an Axon rep:
Or this email between a Frederick Police Department sergeant and an Axon representative, in which a sergeant describes himself as "doing sales" for Axon by providing demos to other agencies.
A machine readable text version of this email is available here.
Emails like this also show which other agencies are considering using Draft One in the future. For example, an email we received from the Campbell Police Department shows that the San Francisco Police Department was testing Draft One as early as October 2024 (the usage was confirmed in June 2025 by the San Francisco Standard).
A machine readable text version of this email is available here.
Your mileage will certainly vary for these email requests, in part because agencies' ability to search their communications varies. Some agencies can search by a keyword like "Draft One" or "Axon," while others can only search by a specific email address.
Communications can be one of the more expensive parts of the request. We've found that adding a date range and key terms or email addresses has helped limit these costs and made our requests a bit clearer for the agency. Axon sends a lot of automated emails to its subscribers, so the agency may quote a large fee for hundreds or thousands of emails that aren't particularly interesting. Many agencies respond positively if a requester reaches out to say they're open to narrowing or focusing their request.
Asking for Body-Worn Camera Footage
One of the big questions is how the Draft One-generated reports compare to the BWC audio the narratives are based on. Are the reports accurate? Are they twisting people's words? Does Draft One hallucinate?
Finding these answers requires obtaining both the police report and the footage of the incident that was fed into the system. The laws and processes for obtaining BWC footage vary dramatically from state to state, and even department to department. Depending on where you live, it can also get expensive very quickly, since some states allow agencies to charge you not only for the footage but also for the time it takes to redact it. So before requesting footage, read up on your state's public access laws or consult a lawyer.
However, once you have a copy of a Draft One report, you should have enough information to file a follow-up request for the BWC footage.
So far, EFF has not requested BWC footage. In addition to the aforementioned financial and legal hurdles, the footage can implicate both individual privacy and transparency regarding police activity. As an organization that advocates for both, we want to make sure we get this balance right. After all, BWCs are a surveillance technology that collects intelligence on suspects, victims, witnesses, and random passersby. When the Palm Beach County Sheriff's Office gave us an AI-generated account of a teenager being hospitalized for suicidal ideations, we of course felt that the minor's privacy outweighed our interest in evaluating the AI. But do we feel the same way about a Draft One-generated narrative about a spring break brawl in Lake Havasu?
Ultimately, we may try to obtain a limited amount of BWC footage, but we also recognize that we shouldn't make the public wait while we work it out for ourselves. Accountability requires different methods, different expertise, and different interests, and with this guide we hope to not only shine light on Draft One, but to provide the schematics for others–including academics, journalists, and local advocates–to build their own spotlights to expose police use of this problematic technology.
Where to Find More Docs
Despite the variation in how agencies responded, we did have some requests that proved fruitful. You can find these requests and the documents we got via the linked police department names below.
Please note that we filed two different types of requests, so not all the elements above may be represented in each link.
Via Document Cloud (PDFs)
- Dacono Police Department, Colorado
- Mount Vernon Police Department, Illinois
- Monroe County Sheriff's Office, New York
- Joliet Police Department, Illinois
- Elgin Police Department, Illinois
- Bishop Police Department, California
- Palm Beach County Sheriff's Office
- Lake Havasu City Police Department, Arizona
- Dickinson Police Department, ND
- Firestone Police Department, Colo.
- Frederick Police Department (DocumentCloud and Google Drive. Frederick provided us a large number of emails in a difficult-to-manage PST format. We unpacked that PST into individual EML files. Because the agency did a keyword search, you may find that some of the emails are not relevant to the issue, but do include the term "draft one." To reduce the noise, we removed emails that were generated prior to the existence of Draft One. We also removed emails that contained police reports with PII. We redacted those reports and uploaded them independently. While Document Cloud allowed us to convert EML files to PDF files, it did not allow us to keep the relationship between the emails and attachments. You can find those records with the relationships somewhat maintained in Google Drive.)
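For anyone repeating the PST step above: once a tool such as `readpst` (from the open-source libpst package) has unpacked the mailbox into individual `.eml` files, Python's standard `email` package can record which attachments belong to which message before any conversion. A rough sketch, with a fabricated demo message standing in for a real unpacked file:

```python
from email import policy
from email.message import EmailMessage
from email.parser import BytesParser

def attachments_of(eml_bytes):
    """Return (subject, [attachment filenames]) for one .eml message."""
    msg = BytesParser(policy=policy.default).parsebytes(eml_bytes)
    return str(msg["Subject"]), [p.get_filename() for p in msg.iter_attachments()]

# Fabricated stand-in for a real .eml file unpacked from the PST:
demo = EmailMessage()
demo["Subject"] = "Re: Draft One demo"
demo.set_content("Report attached.")
demo.add_attachment(b"%PDF-1.4 ...", maintype="application",
                    subtype="pdf", filename="report_001.pdf")

print(attachments_of(demo.as_bytes()))
```

Writing that mapping out (e.g., to a CSV) before converting EML to PDF preserves the email-to-attachment relationships that the conversion otherwise loses.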
Via MuckRock (Assorted filetypes)
- Pasco Police Department, Washington (Part 1, Part 2)
- Colorado Springs Police Department, Colorado
- Fort Collins Police Department, Colorado
- Campbell Police Department, California (Part 1, Part 2)
- Lafayette Police Department, Indiana
- East Palo Alto Police Department, California
Special credit goes to EFF Research Assistant Jesse Cabrera for public records request coordination.
Using Signal Groups for Activism
Good tutorial by Micah Lee. It includes some nonobvious use cases.
FEMA leader is a no-show after deadly Texas flooding
Lack of AC kills New Yorkers every year
10 Northeast states agree to triple rate of emission cuts
Heat waves endanger data centers
California regulator eyes replacement for EV rules revoked by Trump
Pope Leo prays for world to recognize urgency of climate crisis
Center-right Parliament members block effort to blunt far-right control over climate law
Singapore official says climate action sees most uncertainty in a decade
It's EFF's 35th Anniversary (And We're Just Getting Started)
Today we celebrate 35 years of EFF bearing the torch for digital rights against the darkness of the world, and I couldn’t be prouder. EFF was founded at a time when governments were hostile toward technology and clueless about how it would shape your life. While threats from state and commercial forces grew alongside the internet, so too did EFF’s expertise. Our mission has become even larger than pushing back on government ignorance and increasingly dangerous corporate power. In this moment, we're doing our part to preserve the necessities of democracy: privacy, free expression, and due process. It's about guarding the security of our society, along with our loved ones and the vulnerable communities around us.
With the support of EFF’s members, we use law, technology, and activism to create the conditions for human rights and civil liberties to flourish, and for repression to fail.
EFF believes in commonsense freedom and fairness. We’re working toward an environment where your technology works the way you want it to; you can move through the world without the threat of surveillance; and you can have private conversations with the people you care about and support the causes you believe in. We’ve won many fights for encryption, free expression, innovation, and your personal data throughout the years. The opposition is tough, but—with a powerful vision for a better future and you on our side—EFF is formidable.
Throughout EFF’s year-long 35th Anniversary celebration, our dedicated activists, investigators, technologists, and attorneys will share the lessons from EFF’s long and rich history so that we can help overcome the obstacles ahead. Thanks to you, EFF is here to stay.
Together for the Digital Future
As a member-supported nonprofit, everything EFF does depends on you. Donate to help fuel the fight for privacy, free expression, and a future where we protect digital freedom for everyone.
Powerful forces may try to chip away at your rights—but when we stand together, we win.
Watch Today: EFFecting Change Live
Just hours from now, join me for the 35th Anniversary edition of our EFFecting Change livestream. I’m leading this Q&A with EFF Director for International Freedom of Expression Jillian York, EFF Legislative Director Lee Tien, and Professor and EFF Board Member Yoshi Kohno. Together, we’ve seen it all and today we hope you'll join us for what’s next.
11:00 AM Pacific (check local time)
EFF supporters around the world sustain our mission to defend technology creators and users. Thank you for being a part of this community and helping it thrive.
Supporting mission-driven space innovation, for Earth and beyond
As spaceflight becomes more affordable and accessible, the story of human life in space is just beginning. Aurelia Institute wants to make sure that future benefits all of humanity — whether in space or here on Earth.
Founded by Ariel Ekblaw SM ’17, PhD ’20; Danielle DeLatte ’11; and former MIT research scientist Sana Sharma, the nonprofit institute serves as a research lab for space technology and architecture, a center for education and outreach, and a policy hub dedicated to inspiring more people to work in the space industry.
At the heart of the Aurelia Institute's mission is a commitment to making space accessible to all people. A big part of that work involves annual microgravity flights that Ekblaw says are equal parts research mission, workforce training, and inspiration for the next generation of space enthusiasts.
“We’ve done that every year,” Ekblaw says of the flights. “We now have multiple cohorts of students that connect across years. It brings together people from very different backgrounds. We’ve had artists, designers, architects, ethicists, teachers, and others fly with us. In our R&D, we are interested in space infrastructure for the public good. That’s why we’re directing our technology portfolios toward near-term, massive infrastructure projects in low-Earth orbit that benefit life on Earth.”
From the annual flights to the Institute’s self-assembling space architecture technology known as TESSERAE, much of Aurelia’s work is an extension of projects Ekblaw started as a graduate student at MIT.
“My life trajectory changed when I came to MIT,” says Ekblaw, who is still a visiting researcher at MIT. “I am incredibly grateful for the education I got in the Media Lab and the Department of Aeronautics and Astronautics. MIT is what gave me the skill, the technology, and the community to be able to spin out Aurelia and do something important in the space industry at scale.”
“MIT changes lives”
Ekblaw has always been passionate about space. As an undergraduate at Yale University, she took part in a NASA microgravity flight as part of a research project. In the first year of her PhD program at MIT, she led the launch of the Space Exploration Initiative, a cross-Institute effort to drive innovation at the frontiers of space exploration. The ongoing initiative started as a research group but soon raised enough money to conduct microgravity flights and, more recently, conduct missions to the International Space Station and the moon.
“The Media Lab was like magic in the years I was there,” Ekblaw says. “It had this sense of what we used to call ‘anti-disciplinary permission-lessness.’ You could get funding to explore really different and provocative ideas. Our mission was to democratize access to space.”
In 2016, while taking a class taught by Neri Oxman, then a professor in the Media Lab, Ekblaw got the idea for the TESSERAE Project, in which tiles autonomously self-assemble into spherical space structures.
“I was thinking about the future of human flight, and the class was a seeding moment for me,” Ekblaw says. “I realized self-assembly works OK on Earth, it works particularly well at small scales like in biology, but it generally struggles with the force of gravity once you get to larger objects. But microgravity in space was a perfect application for self-assembly.”
That semester, Ekblaw was also taking Professor Neil Gershenfeld’s class MAS.863 (How to Make (Almost) Anything), where she began building prototypes. Over the ensuing years of her PhD, subsequent versions of the TESSERAE system were tested on microgravity flights run by the Space Exploration Initiative, in a suborbital mission with the space company Blue Origin, and as part of a 30-day mission aboard the International Space Station.
“MIT changes lives,” Ekblaw says. “It completely changed my life by giving me access to real spaceflight opportunities. The capstone data for my PhD was from an International Space Station mission.”
After earning her PhD in 2020, Ekblaw decided to ask two researchers from the MIT community and the Space Exploration Initiative, Danielle DeLatte and Sana Sharma, to partner with her to further develop research projects, along with conducting space education and policy efforts. That collaboration turned into Aurelia.
“I wanted to scale the work I was doing with the Space Exploration Initiative, where we bring in students, introduce them to zero-g flights, and then some graduate to sub-orbital, and eventually flights to the International Space Station,” Ekblaw says. “What would it look like to bring that out of MIT and bring that opportunity to other students and mid-career people from all walks of life?”
Every year, Aurelia charters a microgravity flight, bringing about 25 people along to conduct 10 to 15 experiments. To date, nearly 200 people have participated in the flights across the Space Exploration Initiative and Aurelia, and more than 70 percent of those fliers have continued to pursue activities in the space industry post-flight.
Aurelia also offers open-source classes on designing research projects for microgravity environments and contributes to several education and community-building activities across academia, industry, and the arts.
In addition to those education efforts, Aurelia has continued testing and improving the TESSERAE system. In 2022, TESSERAE was brought on the first private mission to the International Space Station, where astronauts conducted tests around the system’s autonomous self-assembly, disassembly, and stability. Aurelia will return to the International Space Station in early 2026 for further testing as part of a recent grant from NASA.
The work led Aurelia to recently spin off the TESSERAE project into a separate, for-profit company. Ekblaw expects there to be more spinoffs out of Aurelia in coming years.
Designing for space, and Earth
The self-assembly work is only one project in Aurelia’s portfolio. Others are focused on designing human-scale pavilions and other habitats, including a space garden and a massive, 20-foot dome depicting the interior of space architectures in the future. This space habitat pavilion was recently deployed as part of a six-month exhibit at the Seattle Museum of Flight.
“The architectural work is asking, ‘How are we going to outfit these systems and actually make the habitats part of a life worth living?’” Ekblaw explains.
With all of its work, Aurelia’s team looks at space as a testbed to bring new technologies and ideas back to our own planet.
“When you design something for the rigors of space, you often hit on really robust technologies for Earth,” she says.
Marine heatwaves select for thermal tolerance in a reef-building coral
Nature Climate Change, Published online: 10 July 2025; doi:10.1038/s41558-025-02381-3
The authors evaluate heritable genetic variation in thermal tolerance in a common reef-building coral. They show widespread heritable genetic variation, which is strongly associated with marine heatwave-imposed selective pressure, suggesting adaptation to climate warming.

Data Brokers are Selling Your Flight Information to CBP and ICE
For many years, data brokers have existed in the shadows, exploiting gaps in privacy laws to harvest our information—all for their own profit. They sell our precise movements without our knowledge or meaningful consent to a variety of private and state actors, including law enforcement agencies. And they show no sign of stopping.
This incentivizes other bad actors. If companies collect any kind of personal data and want to make a quick buck, there’s a data broker willing to buy it and sell it to the highest bidder–often law enforcement and intelligence agencies.
One recent investigation by 404 Media revealed that the Airlines Reporting Corporation (ARC), a data broker owned and operated by at least eight major U.S. airlines, including United Airlines and American Airlines, collected travelers’ domestic flight records and secretly sold access to U.S. Customs and Border Protection (CBP). Despite selling passengers’ names, full flight itineraries, and financial details, the data broker prevented U.S. border authorities from revealing it as the origin of the information. So not only is the government doing an end run around the Fourth Amendment to get information for which it would otherwise need a warrant—it has also been trying to hide how it knows these things about us.
ARC’s Travel Intelligence Program (TIP) aggregates passenger data and contains more than one billion records spanning 39 months of past and future travel by both U.S. and non-U.S. citizens. CBP, which sits within the U.S. Department of Homeland Security (DHS), claims it needs this data to support local and state police keeping track of people of interest. But at a time of growing concerns about increased immigration enforcement at U.S. ports of entry, including unjustified searches, law enforcement officials will use this additional surveillance tool to expand the web of suspicion to even larger numbers of innocent travelers.
More than 200 airlines settle tickets through ARC, with information on more than 54% of flights taken globally. ARC’s board of directors includes representatives from U.S. airlines like JetBlue and Delta, as well as international airlines like Lufthansa, Air France, and Air Canada.
In selling law enforcement agencies bulk access to such sensitive information, these airlines—through their data broker—are putting their own profits over travelers' privacy. U.S. Immigration and Customs Enforcement (ICE) recently detailed its own purchase of personal data from ARC. In the current climate, this can have a detrimental impact on people’s lives.
Movement unrestricted by governments is a hallmark of a free society. In our current moment, when the federal government is threatening legal consequences based on people’s national, religious, and political affiliations, having air travel in and out of the United States tracked by any ARC customer is a recipe for state retribution.
Sadly, data brokers are doing even broader harm to our privacy. Sensitive location data is harvested from smartphones and sold to cops, internet backbone data is sold to federal counterintelligence agencies, and utility databases containing phone, water, and electricity records are shared with ICE officers.
At a time when immigration authorities are eroding fundamental freedoms through increased—and arbitrary—actions at the U.S. border, this news further exacerbates concerns that creeping authoritarianism can be fueled by the extraction of our most personal data—all without our knowledge or consent.
The new revelations about ARC’s data sales to CBP and ICE are a fresh reminder of the need for “privacy first” legislation that imposes consent and minimization limits on corporate processing of our data. We also need to pass the Fourth Amendment Is Not For Sale Act to stop police from bypassing judicial review of their data seizures by purchasing data from brokers. And we must enforce data broker registration laws.
Electronic Frontier Foundation to Present Annual EFF Awards to Just Futures Law, Erie Meyer, and Software Freedom Law Center, India
SAN FRANCISCO—The Electronic Frontier Foundation (EFF) is honored to announce that Just Futures Law, Erie Meyer, and Software Freedom Law Center, India will receive the 2025 EFF Awards for their vital work in ensuring that technology supports privacy, freedom, justice, and innovation for all people.
The EFF Awards recognize specific and substantial technical, social, economic, or cultural contributions in diverse fields including journalism, art, digital access, legislation, tech development, and law.
The EFF Awards ceremony will start at 6 p.m. PT on Wednesday, Sept. 10, 2025 at the San Francisco Design Center Galleria, 101 Henry Adams St. in San Francisco. Guests can register at http://www.eff.org/effawards. The ceremony will be recorded and shared online on Sept. 12.
For the past 30 years, the EFF Awards—previously known as the Pioneer Awards—have recognized and honored key leaders in the fight for freedom and innovation online. Started when the internet was new, the Awards now reflect the fact that the online world has become both a necessity in modern life and a continually evolving set of tools for communication, organizing, creativity, and increasing human potential.
“Whether fighting the technological abuses that abet criminalization, detention, and deportation of immigrants and people of color, or working and speaking out fearlessly to protect Americans’ data privacy, or standing up for digital rights in the world’s most populous country, all of our 2025 Awards winners contribute to creating a brighter tech future for humankind,” EFF Executive Director Cindy Cohn said. “We hope that this recognition will bring even more support for each of these vital efforts.”
Just Futures Law: Leading Immigration and Surveillance Litigation

Just Futures Law is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. It uses litigation to fight back as part of defending and building the power of immigrant rights and criminal justice activists, organizers, and community groups to prevent criminalization, detention, and deportation of immigrants and people of color. Just Futures was founded in 2019 using a movement lawyering and racial justice framework and seeks to transform how litigation and legal support serve communities and build movement power.
In the past year, Just Futures sued the Department of Homeland Security and its subagencies seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and sued the Trump Administration for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti’s current conditions of extraordinary crises. It has represented activists in their fight against tech giants like Clearview AI, it has worked with Mijente to launch the TakeBackTech fellowship to train new advocates on grassroots-directed research, and it has worked with Grassroots Leadership to fight for the release of detained individuals under Operation Lone Star.
Erie Meyer: Protecting Americans' Privacy

Erie Meyer is a Senior Fellow at the Vanderbilt Policy Accelerator where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the Georgetown Law Institute for Technology Law & Policy. She is former Chief Technologist at both the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission. Earlier, she was senior advisor to the U.S. Chief Technology Officer at the White House, where she co-founded the United States Digital Service, a team of technologists and designers working to improve digital services for the public. Meyer also worked as senior director at Code for America, a nonprofit that promotes civic hacking to modernize government services, and in the Ohio Attorney General's office at the height of the financial crisis.
Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she filed a declaration in federal court in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed a declaration in another case warning about using private-sector AI on government information. That same month, she testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.”
Software Freedom Law Center, India: Defending Digital Freedoms

Software Freedom Law Center, India is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about use of technology.
Founded in 2010 by technology lawyer and online civil liberties activist Mishi Choudhary, SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. It also tracks internet shutdowns and censorship incidents across India, provides digital security training, and has launched the Digital Defenders Network, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation cases, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platform that has been used to harass women in India.
To register for this event: http://www.eff.org/effawards
For past honorees: https://www.eff.org/awards/past-winners
Changing the conversation in health care
Generative artificial intelligence is transforming the ways humans write, read, speak, think, empathize, and act within and across languages and cultures. In health care, gaps in communication between patients and practitioners can worsen patient outcomes and prevent improvements in practice and care. The Language/AI Incubator, made possible through funding from the MIT Human Insight Collaborative (MITHIC), offers a potential response to these challenges.
The project envisions a research community rooted in the humanities that will foster interdisciplinary collaboration across MIT to deepen understanding of generative AI’s impact on cross-linguistic and cross-cultural communication. The project’s focus on health care and communication seeks to build bridges across socioeconomic, cultural, and linguistic strata.
The incubator is co-led by Leo Celi, a physician and the research director and senior research scientist with the Institute for Medical Engineering and Science (IMES), and Per Urlaub, professor of the practice in German and second language studies and director of MIT’s Global Languages program.
“The basis of health care delivery is the knowledge of health and disease,” Celi says. “We’re seeing poor outcomes despite massive investments because our knowledge system is broken.”
A chance collaboration
Urlaub and Celi met during a MITHIC launch event. Conversations during the event reception revealed a shared interest in exploring improvements in medical communication and practice with AI.
“We’re trying to incorporate data science into health-care delivery,” Celi says. “We’ve been recruiting social scientists [at IMES] to help advance our work, because the science we create isn’t neutral.”
Language is a non-neutral mediator in health care delivery, the team believes, and can be a boon or barrier to effective treatment. “Later, after we met, I joined one of his working groups whose focus was metaphors for pain: the language we use to describe it and its measurement,” Urlaub continues. “One of the questions we considered was how effective communication can occur between doctors and patients.”
Technology, they argue, impacts casual communication, and its impact depends on both users and creators. As AI and large language models (LLMs) gain power and prominence, their use is broadening to include fields like health care and wellness.
Rodrigo Gameiro, a physician and researcher with MIT’s Laboratory for Computational Physiology, is another program participant. He notes that work at the laboratory centers responsible AI development and implementation. Designing systems that leverage AI effectively, particularly when considering challenges related to communicating across linguistic and cultural divides that can occur in health care, demands a nuanced approach.
“When we build AI systems that interact with human language, we’re not just teaching machines how to process words; we’re teaching them to navigate the complex web of meaning embedded in language,” Gameiro says.
Language’s complexities can impact treatment and patient care. “Pain can only be communicated through metaphor,” Urlaub continues, “but metaphors don’t always match, linguistically and culturally.” Smiley faces and one-to-10 scales — pain measurement tools English-speaking medical professionals may use to assess their patients — may not travel well across racial, ethnic, cultural, and language boundaries.
“Science has to have a heart”
LLMs can potentially help scientists improve health care, although there are some systemic and pedagogical challenges to consider. Science can focus on outcomes to the exclusion of the people it’s meant to help, Celi argues. “Science has to have a heart,” he says. “Measuring students’ effectiveness by counting the number of papers they publish or patents they produce misses the point.”
The point, Urlaub says, is to investigate carefully while simultaneously acknowledging what we don’t know, citing what philosophers call epistemic humility. Knowledge, the investigators argue, is provisional and always incomplete. Deeply held beliefs may require revision in light of new evidence.
“No one’s mental view of the world is complete,” Celi says. “You need to create an environment in which people are comfortable acknowledging their biases.”
“How do we share concerns between language educators and others interested in AI?” Urlaub asks. “How do we identify and investigate the relationship between medical professionals and language educators interested in AI’s potential to aid in the elimination of gaps in communication between doctors and patients?”
Language, in Gameiro’s estimation, is more than just a tool for communication. “It reflects culture, identity, and power dynamics,” he says. In situations where a patient might not be comfortable describing pain or discomfort because of the physician’s position as an authority, or because their culture demands yielding to those perceived as authority figures, misunderstandings can be dangerous.
Changing the conversation
AI’s facility with language can help medical professionals navigate these areas more carefully, providing digital frameworks that offer cultural and linguistic context so patients and practitioners can rely on data-driven, research-supported tools to improve dialogue. Institutions need to reconsider how they educate medical professionals and invite the communities they serve into the conversation, the team says.
“We need to ask ourselves what we truly want,” Celi says. “Why are we measuring what we’re measuring?” The biases we bring with us to these interactions — doctors, patients, their families, and their communities — remain barriers to improved care, Urlaub and Gameiro say.
“We want to connect people who think differently, and make AI work for everyone,” Gameiro continues. “Technology without purpose is just exclusion at scale.”
“Collaborations like these can allow for deep processing and better ideas,” Urlaub says.
Creating spaces where ideas about AI and health care can potentially become actions is a key element of the project. The Language/AI Incubator hosted its first colloquium at MIT in May, which was led by Mena Ramos, a physician and the co-founder and CEO of the Global Ultrasound Institute.
The colloquium also featured presentations from Celi, as well as Alfred Spector, a visiting scholar in MIT’s Department of Electrical Engineering and Computer Science, and Douglas Jones, a senior staff member in the MIT Lincoln Laboratory’s Human Language Technology Group. A second Language/AI Incubator colloquium is planned for August.
Greater integration between the social and hard sciences can potentially increase the likelihood of developing viable solutions and reducing biases. Allowing for shifts in the ways patients and doctors view the relationship, while offering each shared ownership of the interaction, can help improve outcomes. Facilitating these conversations with AI may speed the integration of these perspectives.
“Community advocates have a voice and should be included in these conversations,” Celi says. “AI and statistical modeling can’t collect all the data needed to treat all the people who need it.”
Community needs and improved educational opportunities and practices should be coupled with cross-disciplinary approaches to knowledge acquisition and transfer. The ways people see things are limited by their perceptions and other factors. “Whose language are we modeling?” Gameiro asks about building LLMs. “Which varieties of speech are being included or excluded?” Since meaning and intent can shift across these contexts, it’s important to keep such questions in mind when designing AI tools.
“AI is our chance to rewrite the rules”
While there’s lots of potential in the collaboration, there are serious challenges to overcome, including establishing and scaling the technological means to improve patient-provider communication with AI, extending opportunities for collaboration to marginalized and underserved communities, and reconsidering and revamping patient care.
But the team isn’t daunted.
Celi believes there are opportunities to address the widening gap between people and practitioners while addressing gaps in health care. “Our intent is to reattach the string that’s been cut between society and science,” he says. “We can empower scientists and the public to investigate the world together while also acknowledging the limitations engendered in overcoming their biases.”
Gameiro is a passionate advocate for AI’s ability to change everything we know about medicine. “I’m a medical doctor, and I don’t think I’m being hyperbolic when I say I believe AI is our chance to rewrite the rules of what medicine can do and who we can reach,” he says.
“Education changes humans from objects to subjects,” Urlaub argues, describing the difference between disinterested observers and active and engaged participants in the new care model he hopes to build. “We need to better understand technology’s impact on the lines between these states of being.”
Celi, Gameiro, and Urlaub each advocate for MITHIC-like spaces across health care, places where innovation and collaboration are allowed to occur without the kinds of arbitrary benchmarks institutions have previously used to mark success.
“AI will transform all these sectors,” Urlaub believes. “MITHIC is a generous framework that allows us to embrace uncertainty with flexibility.”
“We want to employ our power to build community among disparate audiences while admitting we don’t have all the answers,” Celi says. “If we fail, it’s because we failed to dream big enough about how a reimagined world could look.”
AI shapes autonomous underwater “gliders”
Marine scientists have long marveled at how animals like fish and seals swim so efficiently despite having different shapes. Their bodies are optimized for efficient, hydrodynamic aquatic navigation so they can exert minimal energy when traveling long distances.
Autonomous vehicles can drift through the ocean in a similar way, collecting data about vast underwater environments. However, the shapes of these gliding machines are less diverse than what we find in marine life — go-to designs often resemble tubes or torpedoes, since they’re fairly hydrodynamic as well. Plus, testing new builds requires lots of real-world trial-and-error.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Wisconsin at Madison propose that AI could help us explore uncharted glider designs more conveniently. Their method uses machine learning to test different 3D designs in a physics simulator, then molds them into more hydrodynamic shapes. The resulting designs can be fabricated via a 3D printer, and the gliders travel using significantly less energy than handmade ones.
The MIT scientists say that this design pipeline could create new, more efficient machines that help oceanographers measure water temperature and salt levels, gather more detailed insights about currents, and monitor the impacts of climate change. The team demonstrated this potential by producing two gliders roughly the size of a boogie board: a two-winged machine resembling an airplane, and a unique, four-winged object resembling a flat fish with four fins.
Peter Yichen Chen, MIT CSAIL postdoc and co-lead researcher on the project, notes that these designs are just a few of the novel shapes his team’s approach can generate. “We’ve developed a semi-automated process that can help us test unconventional designs that would be very taxing for humans to design,” he says. “This level of shape diversity hasn’t been explored previously, so most of these designs haven’t been tested in the real world.”
But how did AI come up with these ideas in the first place? First, the researchers found 3D models of over 20 conventional sea exploration shapes, such as submarines, whales, manta rays, and sharks. Then, they enclosed these models in “deformation cages” that map out different articulation points that the researchers pulled around to create new shapes.
The CSAIL-led team built a dataset of conventional and deformed shapes before simulating how they would perform at different “angles-of-attack” — the direction a vessel will tilt as it glides through the water. For example, a swimmer may want to dive at a -30 degree angle to retrieve an item from a pool.
These diverse shapes and angles of attack were then used as inputs for a neural network that essentially anticipates how efficiently a glider shape will perform at particular angles and optimizes it as needed.
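The loop described above amounts to surrogate-model optimization: evaluate deformed shapes in a simulator, fit a cheap predictor of efficiency, then search the predictor for high-performing shapes. The following is a minimal sketch under stated assumptions — the toy `simulate_lift_to_drag` function stands in for the real fluid simulation, a quadratic least-squares fit stands in for the team’s neural network, and the four “deformation handles” are a hypothetical shape parameterization, not the one used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the physics simulator: maps (shape parameters, angle of
# attack) to a lift-to-drag ratio. The real pipeline queries a fluid
# simulation here; this toy function just has a known optimum.
def simulate_lift_to_drag(shape, angle):
    return 5.0 - np.sum((shape - 0.3) ** 2) - 0.01 * (angle - 20.0) ** 2

# 1. Build a dataset of deformed shapes evaluated at varied angles of attack.
shapes = rng.uniform(-1, 1, size=(200, 4))   # 4 deformation-cage handles
angles = rng.uniform(-30, 30, size=200)
targets = np.array([simulate_lift_to_drag(s, a)
                    for s, a in zip(shapes, angles)])

# 2. Fit a cheap surrogate (quadratic least squares) in place of the
#    neural network: features are [1, x, x^2] for shape params and angle.
def features(shape, angle):
    x = np.append(shape, angle)
    return np.concatenate(([1.0], x, x ** 2))

X = np.array([features(s, a) for s, a in zip(shapes, angles)])
coef, *_ = np.linalg.lstsq(X, targets, rcond=None)

def surrogate(shape, angle):
    return features(shape, angle) @ coef

# 3. Optimize: search for the deformation the surrogate predicts will
#    give the best lift-to-drag ratio at a chosen angle of attack.
candidates = rng.uniform(-1, 1, size=(5000, 4))
preds = [surrogate(c, 20.0) for c in candidates]
best = candidates[int(np.argmax(preds))]
print("best handles:", best, "predicted L/D:", surrogate(best, 20.0))
```

The surrogate is cheap to query, so thousands of candidate deformations can be screened without rerunning the expensive simulator; only the winners need real simulation (or fabrication and pool testing).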
Giving gliding robots a lift
The team’s neural network simulates how a particular glider would react to underwater physics, aiming to capture how it moves forward and the force that drags against it. The goal: find the best lift-to-drag ratio, representing how much the glider is being held up compared to how much it’s being held back. The higher the ratio, the more efficiently the vehicle travels; the lower it is, the more the glider will slow down during its voyage.
Lift-to-drag ratios are key for flying planes: at takeoff, you want to maximize lift so the plane can glide well against wind currents, and when landing, you need sufficient drag to bring it to a full stop.
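Because lift and drag both scale with the same dynamic-pressure term (half the fluid density times velocity squared, times reference area), the lift-to-drag ratio reduces to the ratio of the two force coefficients. A small illustration — the coefficients and conditions below are made up for demonstration, not taken from the paper:

```python
# Lift and drag share the factor 0.5 * rho * v^2 * area, so the
# lift-to-drag ratio is simply c_lift / c_drag.
def lift_to_drag(c_lift, c_drag, rho=1000.0, v=0.5, area=0.1):
    q = 0.5 * rho * v ** 2 * area  # dynamic pressure times reference area
    lift = q * c_lift
    drag = q * c_drag
    return lift / drag

# Hypothetical coefficients: a higher ratio means a more efficient
# glide, i.e. less energy spent per distance traveled.
print(lift_to_drag(0.9, 0.1))  # -> 9.0
```

A glider with a ratio of 9 travels farther per unit of sinking (or per unit of energy spent adjusting buoyancy) than one with a ratio of 4, which is why the optimization targets this single number.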
Niklas Hagemann, an MIT graduate student in architecture and CSAIL affiliate, notes that this ratio is just as useful if you want a similar gliding motion in the ocean.
“Our pipeline modifies glider shapes to find the best lift-to-drag ratio, optimizing its performance underwater,” says Hagemann, who is also a co-lead author on a paper that was presented at the International Conference on Robotics and Automation in June. “You can then export the top-performing designs so they can be 3D-printed.”
Going for a quick glide
While the AI pipeline’s simulations looked realistic, the researchers needed to verify its predictions about glider performance by experimenting in more lifelike environments.
They first fabricated their two-wing design as a scaled-down vehicle resembling a paper airplane. This glider was taken to MIT’s Wright Brothers Wind Tunnel, an indoor space with fans that simulate wind flow. Placed at different angles, the glider’s predicted lift-to-drag ratio was only about 5 percent higher on average than the ones recorded in the wind experiments — a small difference between simulation and reality.
A digital evaluation involving a visual, more complex physics simulator also supported the notion that the AI pipeline made fairly accurate predictions about how the gliders would move. It visualized how these machines would descend in 3D.
To truly evaluate these gliders in the real world, though, the team needed to see how their devices would fare underwater. They printed the two designs that performed best at specific angles-of-attack for this test: a jet-like device at 9 degrees and the four-wing vehicle at 30 degrees.
Both shapes were fabricated in a 3D printer as hollow shells with small holes that flood when fully submerged. This lightweight design makes the vehicle easier to handle outside of the water and requires less material to be fabricated. The researchers placed a tube-like device inside these shell coverings, which housed a range of hardware, including a pump to change the glider’s buoyancy, a mass shifter (a device that controls the machine’s angle-of-attack), and electronic components.
Each design outperformed a handmade torpedo-shaped glider by moving more efficiently across a pool. With higher lift-to-drag ratios than their counterpart, both AI-driven machines exerted less energy, similar to the effortless ways marine animals navigate the oceans.
As much as the project is an encouraging step forward for glider design, the researchers are looking to narrow the gap between simulation and real-world performance. They are also hoping to develop machines that can react to sudden changes in currents, making the gliders more adaptable to seas and oceans.
Chen adds that the team is looking to explore new types of shapes, particularly thinner glider designs. They intend to make their framework faster, perhaps bolstering it with new features that enable more customization, maneuverability, or even the creation of miniature vehicles.
Chen and Hagemann co-led research on this project with OpenAI researcher Pingchuan Ma SM ’23, PhD ’25. They authored the paper with Wei Wang, a University of Wisconsin at Madison assistant professor and recent CSAIL postdoc; John Romanishin ’12, SM ’18, PhD ’23; and two MIT professors and CSAIL members: lab director Daniela Rus and senior author Wojciech Matusik. Their work was supported, in part, by a Defense Advanced Research Projects Agency (DARPA) grant and the MIT-GIST Program.