The Department of Defense Wants Less Proof its Software Works

31 October 2025 at 11:29

When Congress eventually reopens, the 2026 National Defense Authorization Act (NDAA) will be moving toward a vote. This gives us a chance to see the priorities of the Secretary of Defense and his Congressional allies when it comes to the military—and one of those priorities is buying technology, especially AI, with less of an obligation to prove it’s effective and worth the money the government will be paying for it. 

As reported by Lawfare, “This year’s defense policy bill—the National Defense Authorization Act (NDAA)—would roll back data disclosures that help the department understand the real costs of what they are buying, and testing requirements that establish whether what contractors promise is technically feasible or even suited to its needs.” This change comes amid a push from the Secretary of Defense to “Maximize Lethality” by acquiring modern software “at a speed and scale for our Warfighter.” The Senate Armed Services Committee has also expressed interest in making “significant reforms to modernize the Pentagon's budgeting and acquisition operations...to improve efficiency, unleash innovation, and modernize the budget process.”

The 2026 NDAA itself says that the “Secretary of Defense shall prioritize alternative acquisition mechanisms to accelerate development and production” of technology, including an expedited “software acquisition pathway”—a special part of the U.S. Code that, if this version of the NDAA passes, will empower the Secretary of Defense to streamline the buying process and get new technology, or updates to existing technology, operational “in a period of not more than one year from the time the process is initiated…” It also ensures the new technology “shall not be subjected to” some of the traditional levers of oversight.

All of this signals one thing: speed over due diligence. In a commercial technology landscape where companies are repeatedly found to be overselling or even deceiving people about their product’s technical capabilities—or where police departments are constantly grappling with the reality that expensive technology may not be effective at providing the solutions they’re after—it’s important that the government agency with the most expansive budget has time to test the efficacy and cost-efficiency of new technology. It’s easy for the military or police departments to listen to a tech company’s marketing department and believe their well-rehearsed sales pitch, but Congress should make sure that public money is being used wisely and in a way that is consistent with both civil liberties and human rights. 

The military and those who support its preferred budget should think twice about cutting corners before buying and deploying new technology. The Department of Defense’s posturing does not elicit confidence that the technologically-focused military of tomorrow will be equipped in a way that is effective, efficient, or transparent. 

Victory! California Requires Transparency for AI Police Reports

14 October 2025 at 13:44

California Governor Newsom has signed S.B. 524, a bill that begins the long process of regulating and imposing transparency on the growing problem of AI-written police reports. EFF supported this bill and has spent the last year vocally criticizing the companies pushing AI-generated police reports as a service. 

S.B. 524 requires police to disclose, on the report itself, whether AI was used to author it in full or in part. Further, it bans vendors from selling or sharing the information a police agency provided to the AI.

The bill is also significant because it requires departments to retain the first draft of the report, so that judges, defense attorneys, or auditors can readily see which portions of the final report were written by the officer and which portions were written by the computer. This creates major problems for police who use the most popular product in this space: Axon’s Draft One. By design, Draft One does not retain an edit log of who wrote what. Now, to stay in compliance with the law, police departments will either need Axon to change its product, or officers will have to take it upon themselves to retain evidence of what the draft of their report looked like. Or police can drop Axon’s Draft One altogether.

EFF will continue to monitor whether departments are complying with this state law.

After Utah, California has become the second state to pass legislation that begins to address this problem. Because of the lack of transparency surrounding how police departments buy and deploy technology, it’s often hard to know if police departments are using AI to write reports, how the generative AI chooses to translate audio to a narrative, and which portions of reports are written by AI and which parts are written by the officers. EFF has written a guide to help you file public records requests that might shed light on your police department’s use of AI to write police reports. 

It’s still unclear if products like Draft One run afoul of record retention laws, and how AI-written police reports will impact the criminal justice system. We will need to consider more comprehensive regulation and perhaps even prohibition of this use of generative AI. But S.B. 524 is a good first step. We hope that more states will follow California and Utah’s lead and pass even stronger bills.

EFF and Other Organizations: Keep Key Intelligence Positions Senate Confirmed

8 October 2025 at 15:19

In a joint letter to the ranking members of the House and Senate intelligence committees, EFF has joined with 20 other organizations, including the ACLU, Brennan Center, CDT, Asian Americans Advancing Justice, and Demand Progress, to express opposition to a rule change that would seriously weaken accountability in the intelligence community. Specifically, under the proposed Senate Intelligence Authorization Act, S. 2342, the general counsels of the Central Intelligence Agency (CIA) and the Office of the Director of National Intelligence (ODNI) would no longer be subject to Senate confirmation.

You can read the entire letter here.

In theory, having the most important legal thinkers at these secretive agencies (the ones who presumably tell an agency whether something is legal or not) approved or rejected by the Senate gives elected officials the chance to vet candidates and their beliefs. If, for instance, a confirmation hearing had uncovered that a proposed general counsel for the CIA thinks it’s not only legal but morally justifiable for the agency to spy on US persons on US soil because of their political or religious beliefs, then the Senate would have the chance to reject that person.

As the letter says, “The general counsels of the CIA and ODNI wield extraordinary influence, and they do so entirely in secret, shaping policies on surveillance, detention, interrogation, and other highly consequential national security matters. Moreover, they are the ones primarily responsible for determining the boundaries of what these agencies may lawfully do. The scope of this power and the fact that it occurs outside of public view is why Senate confirmation is so important.” 

It is for this reason that EFF and our ally organizations urge Congress to remove this provision from the Senate Intelligence Authorization Act.

Hey, San Francisco, There Should be Consequences When Police Spy Illegally

3 October 2025 at 14:07

A San Francisco supervisor has proposed that police and other city agencies should face no financial consequences for breaking a landmark surveillance oversight law. In 2019, organizations from across the city worked together to help pass that law, which required law enforcement to get the approval of democratically elected officials before buying and using new spying technologies. Bit by bit, the San Francisco Police Department and the Board of Supervisors have weakened that law, but one important feature remains: if city officials are caught breaking it, residents can sue to enforce it, and if they prevail they are entitled to attorney fees.

Now Supervisor Matt Dorsey believes that this important accountability feature is “incentivizing baseless but costly lawsuits that have already squandered hundreds of thousands of taxpayer dollars over bogus alleged violations of a law that has been an onerous mess since it was first enacted.” 

Between 2010 and 2023, San Francisco had to spend roughly $70 million to settle civil suits brought against the SFPD for alleged misconduct ranging from shooting city residents to wrongfully firing whistleblowers. This is not “squandered” money; it is compensation for injury. We are all governed by laws and are all expected to act accordingly; police are not exempt from consequences for using their power wrongfully. In the 21st century, this accountability must extend to using powerful surveillance technology responsibly.

The ability to sue a police department when it violates the law is called a “private right of action,” and it is absolutely essential to enforcing the law. Government officials tasked with making other government officials turn square corners will rarely have sufficient resources to do the job alone, and often they will not want to blow the whistle on peers. But city residents empowered to bring a private right of action typically cannot do the job alone, either; they need a lawyer to represent them. So private rights of action provide for an attorney fee award to people who win these cases. This is a routine part of scores of public interest laws involving civil rights, labor safeguards, environmental protection, and more.

Without an enforcement mechanism to hold police accountable, many will just ignore the law. They’ve done it before. AB 481 is a California state law that requires police to get elected official approval before attempting to acquire military equipment, including drones. The SFPD knowingly ignored this law. If it had an enforcement mechanism, more police would follow the rules. 

President Trump recently included San Francisco in a list of cities he would like the military to occupy. Law enforcement agencies across the country, either willingly or by compulsion, have been collaborating with federal agencies operating at the behest of the White House. So it would be best for cities to keep their co-optable surveillance infrastructure small, transparent, and accountable. With authoritarianism looming, now is not the time to make police harder to control, especially considering SFPD has already disclosed surveillance data to Immigration and Customs Enforcement (ICE) in violation of California state law.

We’re calling on the Board of Supervisors to reject Supervisor Dorsey’s proposal. If police want to avoid being sued and forced to pay the prevailing party’s attorney fees, they should avoid breaking the laws that govern police surveillance in the city.

Flock’s Gunshot Detection Microphones Will Start Listening for Human Voices

2 October 2025 at 11:45

Flock Safety, the police technology company most notable for their extensive network of automated license plate readers spread throughout the United States, is rolling out a new and troubling product that may create headaches for the cities that adopt it: detection of “human distress” via audio. As part of their suite of technologies, Flock has been pushing Raven, their version of acoustic gunshot detection. These devices capture sounds in public places and use machine learning to try to identify gunshots and then alert police—but EFF has long warned that they are also high powered microphones parked above densely-populated city streets. Cities now have one more reason to follow the lead of many other municipalities and cancel their Flock contracts, before this new feature causes civil liberties harms to residents and headaches for cities. 

In marketing materials, Flock has been touting new features to their Raven product—including the ability of the device to alert police based on sounds, including “distress.” The online ad for the product, which allows cities to apply for early access to the technology, shows the image of police getting an alert for “screaming.” 

It’s unclear how this technology works. Acoustic gunshot detection generally relies on microphones listening for sounds that signify gunshots (though in practice these systems often mistake car backfires or fireworks for gunshots). Flock needs to come forward now with an explanation of exactly how its new technology functions. It is also unclear how these devices will interact with state “eavesdropping” laws that limit listening to or recording the private conversations that often take place in public.

Flock is no stranger to creating legal headaches for the cities and states that adopt its products. In Illinois, Flock was accused of violating state law by allowing Immigration and Customs Enforcement (ICE), a federal agency, access to license plate reader data collected within the state. That’s not all. In 2023, a North Carolina judge halted the installation of Flock cameras statewide because the company was operating in the state without a license. When the city of Evanston, Illinois, recently canceled its contract with Flock, it ordered the company to take down its license plate readers, only for Flock to mysteriously reinstall them a few days later. The city has now sent Flock a cease and desist order and, in the meantime, has put black tape over the cameras. For some, the technology isn’t worth its mounting downsides. As one Illinois village trustee wrote while explaining his vote to cancel the city’s contract with Flock, “According to our own Civilian Police Oversight Commission, over 99% of Flock alerts do not result in any police action.”

Gunshot detection technology is dangerous enough as it is—police showing up to alerts they think are gunfire only to find children playing with fireworks is a recipe for innocent people to get hurt. This isn’t hypothetical: in Chicago, a child really was shot at by police who thought they were responding to a shooting thanks to a ShotSpotter alert. Introducing a new feature that allows these pre-installed Raven microphones all over cities to begin listening for human voices in distress is likely to open up a whole new can of unforeseen legal, civil liberties, and even bodily safety consequences.

California, Tell Governor Newsom: Regulate AI Police Reports and Sign S.B. 524

16 September 2025 at 15:30

The California legislature has passed a necessary piece of legislation, S.B. 524, which starts to regulate police reports written by generative AI. Now, it’s up to us to make sure Governor Newsom will sign the bill. 

We must make our voices heard. These technologies obscure certain records and drafts from public disclosure, and vendors have invested heavily in their ability to sell genAI to police.

TAKE ACTION

AI-generated police reports are spreading rapidly. The most popular product on the market is Draft One, made by Axon, which is already one of the country’s biggest purveyors of police tech, including body-worn cameras. By bundling its products together, Axon has capitalized on its existing customer base to spread its untransparent and potentially harmful genAI product.

Many things can go wrong when genAI is used to write narrative police reports. First, because the product relies on body-worn camera audio, there’s a big chance of the AI draft missing context: sarcasm, culturally specific vocabulary and slang, or languages other than English. While police are expected to edit the AI’s version of events to make up for these flaws, many officers will defer to the AI. Police are also supposed to make an independent decision before arresting a person who was identified by face recognition, and police mess that up all the time. The prosecutor of King County, Washington, has forbidden local officers from using Draft One out of fear that it is unreliable.

Then, of course, there’s the matter of dishonesty. Many public defenders and criminal justice practitioners have voiced concerns about what this technology would do to cross-examination. If caught with a different story on the stand than the one in their police report, an officer can easily say, “the AI wrote that and I didn’t edit well enough.” The genAI creates a layer of plausible deniability: carelessness is a very different offense than lying on the stand.

To make matters worse, an investigation by EFF found that Axon’s Draft One product defies transparency by design. The technology is deliberately built to obscure what portion of a finished report was written by AI and which portions were written by an officer–making it difficult to determine if an officer is lying about which portions of a report were written by AI. 

But now, California has an important chance to join states like Utah that are passing laws to rein in these technologies and to set the minimum safeguards and transparency that must accompany their use.

S.B. 524 does several important things: It mandates that police reports written by AI include disclaimers on every page or within the body of the text that make it clear that this report was written in part or in total by a computer. It also says that any reports written by AI must retain their first draft. That way, it should be easier for defense attorneys, judges, police supervisors, or any other auditing entity to see which portions of the final report were written by AI and which parts were written by the officer. Further, the bill requires officers to sign and verify that they read the report and its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.

These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, Axon’s Draft One would be out of compliance with this bill, which would require Axon to redesign the tool to make it more transparent—a small win for communities everywhere.

So now we’re asking you: help us make a difference. Use EFF’s Action Center to tell Governor Newsom to sign S.B. 524 into law! 

TAKE ACTION

San Francisco Gets An Invasive Billionaire-Bought Surveillance HQ

10 September 2025 at 12:04

San Francisco billionaire Chris Larsen once again has wielded his wallet to keep city residents under the eye of all-seeing police surveillance. 

The San Francisco Police Commission, the Board of Supervisors, and Mayor Daniel Lurie have signed off on Larsen’s $9.4 million gift of a new Real-Time Investigations Center. The plan involves moving the city’s existing police tech hub from the public Hall of Justice not to the city’s brand-new police headquarters but instead to a sublet in the Financial District building of Ripple Labs, Larsen’s crypto-transfer company. Although the city reportedly won’t be paying for the space, the lease is said to have cost Ripple $2.3 million and will last until December 2026.

The deal will also include a $7.25 million gift from the San Francisco Police Community Foundation that Larsen created. Police foundations are semi-public fundraising arms of police departments that allow them to buy technology and gear that the city will not give them money for.  

In Los Angeles, the city’s police foundation got $178,000 from the company Target to pay for the services of the data analytics company Palantir to use for predictive policing. In Atlanta, the city’s police foundation funds a massive surveillance apparatus as well as the much-maligned Cop City training complex. (Despite police foundations’ insistence that they are not public entities and therefore do not need to be transparent or answer public records requests, a judge recently ordered the Atlanta Police Foundation to release documentation related to Cop City.) 

A police foundation in San Francisco brings the same concerns: an unaccountable and untransparent fundraising arm schmoozing with corporations and billionaires could fund unpopular surveillance measures without having to reveal much to the public.

Larsen was one of the deep pockets behind last year’s Proposition E, a ballot measure to supercharge surveillance in the city. The measure usurped the city’s 2019 surveillance transparency and accountability ordinance, which had required the SFPD to get the elected Board of Supervisors’ approval before buying and using new surveillance technology. This common-sense democratic hurdle was, apparently, a bridge too far for the SFPD and for Larsen.  

We’re no fans of real-time crime centers (RTCCs), as they’re often called elsewhere, to start with. They’re basically control rooms that pull together all feeds from a vast warrantless digital dragnet, often including automated license plate readers, fixed cameras, officers’ body-worn cameras, drones, and other sources. It’s a means of consolidating constant surveillance of the entire population, tracking everyone wherever they go and whatever they do – worrisome at any time, but especially in a time of rising authoritarianism.  

Think of what this data could do if it got into federal hands; imagine how vulnerable city residents would be to harassment if every move they made was centralized and recorded downtown. But you don’t have to imagine, because SFPD has already been caught sharing automated license plate reader data with out-of-state law enforcement agencies assisting in federal immigration investigations.

We’re especially opposed to RTCCs using live feeds from non-city surveillance cameras to push that panopticon’s boundaries even wider, as San Francisco’s does. Those semi-private networks of some 15,000 cameras, already abused by SFPD to surveil lawful protests against police violence, were funded in part by – you guessed it – Chris Larsen. 

These technologies could potentially endanger San Franciscans by directing armed police at them due to reliance on a faulty algorithm or by putting already-marginalized communities at further risk of overpolicing and surveillance. But studies find that these technologies just don’t work. If the goal is to stop crime before it happens, to spare someone the hardship and the trauma of getting robbed or hurt, cameras clearly do not accomplish this. There’s plenty of footage of crime occurring that belies the idea that surveillance is an effective deterrent, and although police often look to technology as a silver bullet to fight crime, evidence suggests that it does little to alter the historic ebbs and flows of criminal activity. 

Yet now this unelected billionaire – who already helped gut police accountability and transparency rules and helped fund sketchy surveillance of people exercising their First Amendment rights – wants to bankroll, expand, and host the police’s tech nerve center. 

Policing must be a public function so that residents can control, and demand accountability and transparency from, those who serve and protect but also surveil and track us all. Being financially beholden to private interests erodes the community’s trust and control, and it can leave the public high and dry if a billionaire’s whims change or conflict with the will of the people. Chris Larsen could have tried to address the root causes of crime that affect our community; instead, he exercises his bank account’s muscle to decide that surveillance is best for San Franciscans with less in their wallets.

Elected officials should have said “thanks but no thanks” to Larsen and ensured that the San Francisco Police Department remained under the complete control and financial auspices of nobody except the people of San Francisco. Rich people should not be allowed to fund the further degradation of our privacy as we go about our lives in our city’s public places. Residents should carefully watch what comes next to decide for themselves whether a false sense of security is worth living under constant, all-seeing, billionaire-bankrolled surveillance. 

California Lawmakers: Support S.B. 524 to Rein in AI-Written Police Reports

4 September 2025 at 14:48

EFF urges California state lawmakers to pass S.B. 524, authored by Sen. Jesse Arreguín. This bill is an important first step in regaining control over police using generative AI to write their narrative police reports. 

This bill does several important things: It mandates that police reports written by AI include disclaimers on every page or within the body of the text that make it clear that this report was written in part or in total by a computer. It also says that any reports written by AI must retain their first draft. That way, it should be easier for defense attorneys, judges, police supervisors, or any other auditing entity to see which portions of the final report were written by AI and which parts were written by the officer. Further, the bill requires officers to sign and verify that they read the report and its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.

These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, a popular AI police report writing tool, Axon’s Draft One, would be out of compliance with this bill, which would require Axon to redesign the tool to make it more transparent.

Draft One takes audio from an officer’s body-worn camera and uses AI to turn that dialogue into a narrative police report. Because independent researchers have been unable to test it, there are important questions about how the system handles things like sarcasm, out-of-context comments, or interactions with members of the public who speak languages other than English. Another major concern is Draft One’s inability to keep track of which parts of a report were written by people and which parts were written by AI. By design, the product does not retain different iterations of the draft—making it easy for an officer to say, “I didn’t lie in my police report; the AI wrote that part.”

All lawmakers should pass regulations on AI-written police reports. This technology could be nearly everywhere, and soon. Axon is a top supplier of body-worn cameras in the United States, which means it has a massive ready-made customer base. Through product bundling, AI-written police reports could soon be in use at a vast share of police departments.

AI-written police reports are unproven in terms of both their accuracy and their overall effects on the criminal justice system. Vendors still have a long way to go to prove this technology can be transparent and auditable. While it would not solve all of the many problems of AI encroaching on the criminal justice system, S.B. 524 is a good first step toward reining in an unaccountable piece of technology.

We urge California lawmakers to pass S.B. 524. 

Amazon Ring Cashes in on Techno-Authoritarianism and Mass Surveillance

18 July 2025 at 10:37

Ring founder Jamie Siminoff is back at the helm of the surveillance doorbell company, and with him comes the surveillance-first, privacy-last approach that made Ring one of the most maligned tech devices. Not only is the company reintroducing new versions of old features that would allow police to request footage directly from Ring users, it is also introducing a new feature that would allow police to request live-stream access to people’s home security devices.

This is a bad, bad step for Ring and the broader public. 

Ring is rolling back many of the reforms it made in the last few years by easing police access to footage from millions of homes in the United States. This is a grave threat to civil liberties. After all, police have used Ring footage to spy on protestors and have obtained footage without a warrant or the consent of the user. It is easy to imagine that law enforcement officials will use their renewed access to Ring information to find people who have had abortions or to track down people for immigration enforcement.

Siminoff has announced in a memo seen by Business Insider that the company will now be reimagined from the ground up to be “AI first”—whatever that means for a home security camera that lets you see who is ringing your doorbell. We fear that this may signal the introduction of video analytics or face recognition to an already problematic surveillance device. 

It was also reported that employees at Ring will have to show proof that they use AI in order to get promoted. 

And Ring isn’t stopping at new bad features; it is also rolling back some of the necessary reforms it had made: partnering with Axon to build a new tool that would allow police to request Ring footage directly from users, and allowing users to consent to letting police livestream directly from their devices.

After years of serving as the eyes and ears of police, the company was compelled by public pressure to make a number of necessary changes. They introduced end-to-end encryption, they ended their formal partnerships with police, which were an ethical minefield, and they ended their tool that facilitated police requests for footage directly to customers. Now they are pivoting back to being a tool of mass surveillance.

Why now? It is hard to believe the company is betraying the trust of its millions of customers in the name of “safety” when violent crime in the United States is reaching near-historically low levels. It’s probably not about their customers—the FTC had to compel Ring to take its users’ privacy seriously. 

No, this is most likely about Ring cashing in on the rising tide of techno-authoritarianism, that is, authoritarianism aided by surveillance tech. Too many tech companies want to profit from our shrinking liberties. Google likewise recently ended an old ethical commitment that prohibited it from profiting off of surveillance and warfare. Companies are locking down billion-dollar contracts by selling their products to the defense sector or police.

Shame on Ring.

Axon’s Draft One Is Designed to Defy Transparency

10 July 2025 at 12:00

Axon Enterprise’s Draft One — a generative artificial intelligence product that writes police reports based on audio from officers’ body-worn cameras — seems deliberately designed to avoid audits that could provide any accountability to the public, an EFF investigation has found.

Our review of public records from police agencies already using the technology — including police reports, emails, procurement documents, department policies, software settings, and more — as well as Axon’s own user manuals and marketing materials revealed that it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.

You can read our full report, which details what we found in those documents, how we filed those public records requests, and how you can file your own, here.

Everyone should have access to answers, evidence, and data regarding the effectiveness and dangers of this technology. Axon and its customers claim this technology will revolutionize policing, but it remains to be seen how it will change the criminal justice system, and who this technology benefits most.

For months, EFF and other organizations have warned about the threats this technology poses to accountability and transparency in an already flawed criminal justice system.  Now we've concluded the situation is even worse than we thought: There is no meaningful way to audit Draft One usage, whether you're a police chief or an independent researcher, because Axon designed it that way. 

Draft One uses a ChatGPT variant to process body-worn camera audio of public encounters and create police reports based only on the captured verbal dialogue; it does not process the video. The Draft One-generated text is sprinkled with bracketed placeholders where officers are encouraged to add additional observations or information—or which can be quickly deleted. Officers are supposed to edit Draft One’s report and correct anything the genAI misunderstood due to a lack of context, troubled translations, or just plain-old mistakes. When they’re done, the officer is prompted to sign an acknowledgement that the report was generated using Draft One and that they have reviewed the report and made the edits necessary to ensure it is consistent with the officer’s recollection. Then they can copy and paste the text into their report. When they close the window, the draft disappears.
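To make that lifecycle concrete, here is a minimal sketch in Python of the workflow as described above. Every name in it is hypothetical; this is not Axon’s code or API, only an illustration of a design in which the AI draft is never persisted.

```python
# A minimal sketch (hypothetical names throughout) of the Draft One
# lifecycle described above; this is not Axon's actual code or API.

def ai_draft(transcript: str) -> str:
    # Stand-in for the ChatGPT-variant step: a narrative sprinkled
    # with bracketed placeholders the officer is prompted to resolve.
    return f"Officers responded to a call. [ADD OBSERVATIONS] ({transcript})"

def officer_edit(draft: str) -> str:
    # Stand-in for the officer's review; rubber-stamping is as easy
    # as deleting the placeholder and submitting.
    return draft.replace("[ADD OBSERVATIONS] ", "")

def sign_acknowledgement() -> None:
    # One of the few Draft One events that is actually logged.
    print("Officer acknowledges AI assistance and attests to review.")

def draft_one_session(transcript: str) -> str:
    draft = ai_draft(transcript)  # exists only in this session's memory
    final = officer_edit(draft)   # same buffer; no version history kept
    sign_acknowledgement()
    return final                  # copy-pasted into the records system

report = draft_one_session("verbal dialogue captured by body-worn camera")
# When the session ends, `draft` goes out of scope and was never written
# to disk or cloud; nothing records which words were the AI's and which
# were the officer's. Only `report` survives.
```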

Any new, untested, and problematic technology needs a robust process to evaluate its use by officers. In this case, one would expect police agencies to retain data that ensures officers are actually editing the AI-generated reports as required, or that officers can accurately answer if a judge demands to know whether, or which part of, reports used by the prosecution were written by AI. 

"We love having new toys until the public gets wind of them."

One would expect audit systems to be readily available to police supervisors, researchers, and the public, so that anyone can make their own independent conclusions. And one would expect that Draft One would make it easy to discern its AI product from human product – after all, even your basic, free word processing software can track changes and save a document history.

But Draft One defies all these expectations, offering meager oversight features that deliberately conceal how it is used. 

So when a police report includes biased language, inaccuracies, misinterpretations, or even outright lies, the record won't indicate whether the officer or the AI is to blame. That makes it extremely difficult, if not impossible, to assess how the system affects justice outcomes, because there is little non-anecdotal data from which to determine whether the technology is junk. 

The disregard for transparency is perhaps best encapsulated by a short email that an administrator in the Frederick Police Department in Colorado, one of Axon's first Draft One customers, sent to a company representative after receiving a public records request related to AI-generated reports. 

"We love having new toys until the public gets wind of them," the administrator wrote.

No Record of Who Wrote What

The first question anyone should have about a police report written using Draft One is which parts were written by AI and which were added by the officer. Once you know this, you can start to answer more questions, like: 

  • Are officers meaningfully editing and adding to the AI draft? Or are they reflexively rubber-stamping the drafts to move on as quickly as possible? 
  • How often are officers finding and correcting errors made by the AI, and are there patterns to these errors? 
  • If there is inappropriate language or a fabrication in the final report, was it introduced by the AI or the officer? 
  • Is the AI overstepping in its interpretation of the audio? If a report says, "the subject made a threatening gesture," was that added by the officer, or did the AI make a factual assumption based on the audio? If a suspect uses metaphorical slang, does the AI document it literally? If a subject says "yeah" throughout a conversation as a verbal acknowledgement that they're listening to what the officer says, is that interpreted as an agreement or a confession?

"So we don’t store the original draft and that’s by design..."

Ironically, Draft One does not save the first draft it generates. Nor does the system store any subsequent versions. Instead, the officer copies and pastes the text into the police report, and the draft originally created by Draft One disappears as soon as the window closes. There is no log or record indicating which portions of a report were written by the computer and which portions were written by the officer, except for the officer's own recollection. If an officer generates a Draft One report multiple times, there's no way to tell whether the AI interprets the audio differently each time.

Axon is open about not maintaining these records, at least when it markets directly to law enforcement.

In this video of a roundtable discussion about the Draft One product, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added): 

“So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.”

To reiterate: Axon deliberately does not store the original draft written by the genAI, because “the last thing” they want is for cops to have to provide that data to anyone (say, a judge, defense attorney, or civil liberties non-profit).

Following up on the same question, Axon's Director of Strategic Relationships at Axon Justice suggests this is fine, since a police officer using a word processor wouldn't be required to save every draft of a police report as they re-write it. This is, of course, misdirection, and not remotely comparable. An officer with a word processor is one thought process and a record created by one party; Draft One involves two processes from two parties, Axon and the officer. Ultimately, it could and should be considered two records: the version sent to the officer from Axon and the version edited by the officer.

Word processors may no longer hold unexpected consequences for police report-writing, but Draft One is still unproven. After all, every AI evangelist, including Axon, claims this technology is a game-changer. So why wouldn't an agency want to maintain a record that can establish the technology's accuracy?

It also appears that Draft One isn't simply hewing to long-established norms of police report-writing; it may fundamentally change them. In one email, the Campbell Police Department's Police Records Supervisor tells staff, “You may notice a significant difference with the narrative format…if the DA’s office has comments regarding our report narratives, please let me know.” It's more than a little shocking that a police department would implement such a change without fully soliciting and addressing the input of prosecutors. In this case, the Santa Clara County District Attorney had already suggested police include a disclosure when Axon Draft One is used in each report, but Axon's engineers had yet to finalize the feature at the time it was rolled out. 

One of the main concerns, of course, is that this system effectively creates a smokescreen over truth-telling in police reports. If an officer lies or uses inappropriate language in a police report, who is to say whether the officer wrote it or the AI? An officer can be punished severely for official dishonesty, but the consequences may be more lenient for a cop who blames it on the AI. Engineers have already discovered a bug that allowed officers, on at least three occasions, to circumvent the "guardrails" that supposedly deter officers from submitting AI-generated reports without reading them first, as Axon disclosed to the Frederick Police Department.

To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it's used. But Axon has intentionally made this difficult. 

What the Audit Trail Actually Looks Like 

You may have seen news stories or other public statements asserting that Draft One does, indeed, have auditing features. So, we dug through the user manuals to figure out what exactly that means.

The first thing to note is that, based on our review of the documentation, there appears to be no feature in Axon software that allows departments to export a list of all police officers who have used Draft One. Nor is it possible to export a list of all reports created by Draft One, unless the department has customized its process (we'll get to that in a minute).

This is disappointing because, without this information, it's nearly impossible to do even the most basic statistical analysis: how many officers are using the technology, and how often.

Based on the documentation, you can only export two types of very basic logs, with the process differing depending on whether an agency uses Evidence or Records/Standards products. These are:

  1. A log of basic actions taken on a particular report. If the officer requested a Draft One report or signed the Draft One liability disclosure related to the police report, it will show here. But nothing more than that.
  2. A log of an individual officer/user's basic activity in the Axon Evidence/Records system. This audit log shows things such as when an officer logs into the system, uploads videos, or accesses a piece of evidence. The only Draft One-related activities it tracks are whether the officer ran a Draft One request, signed the Draft One liability disclosure, or changed the Draft One settings.

This means that, to do a comprehensive review, an evaluator may need to go through the records management system and look up each officer individually to identify whether that officer used Draft One and when. That could mean combing through dozens, hundreds, or in some cases thousands of individual user logs.
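That per-officer slog is easy to picture in code. Below is a rough sketch of such an audit, assuming the per-user logs have been exported as CSV files; the directory layout and column name are our assumptions, not Axon’s actual export format.

```python
# A rough sketch of the only audit route the manuals appear to allow:
# comb each officer's exported activity log, one by one, for Draft One
# entries. The directory layout and "action" column are assumptions,
# not Axon's actual export format.
import csv
from collections import Counter
from pathlib import Path

usage = Counter()
for log_file in Path("exported_user_logs").glob("*.csv"):
    officer = log_file.stem  # one exported audit log per officer
    with log_file.open(newline="") as f:
        for row in csv.DictReader(f):
            # Documented events are roughly: ran a Draft One request,
            # signed the liability disclosure, changed Draft One settings.
            if "draft one" in (row.get("action") or "").lower():
                usage[officer] += 1

print(f"{len(usage)} officers show Draft One activity")
for officer, count in usage.most_common():
    print(officer, count)
```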

An example of Draft One usage in an audit log: the log shows only when an officer has generated a report and when they have signed the liability disclosure.

An auditor could also go report-by-report to see which ones involved Draft One, but the sheer number of reports generated by an agency means this method would require a massive amount of time.

But can agencies even create a list of police reports that were co-written with AI? It depends on whether the agency has included a disclosure in the body of the text, such as "I acknowledge this report was generated from a digital recording using Draft One by Axon." If so, then an administrator can use "Draft One" as a keyword search to find relevant reports.
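That keyword method is simple to reproduce. Here is a minimal sketch, assuming finished reports have been exported as plain-text files; the path and exact disclosure wording are assumptions and will vary by agency.

```python
# A minimal sketch of the disclosure-keyword search described above,
# assuming finished reports are available as plain-text files; the
# path and exact disclosure wording vary by agency.
from pathlib import Path

DISCLOSURE = "draft one"  # fragment of the required disclosure language

ai_reports = [
    p for p in Path("exported_reports").glob("*.txt")
    if DISCLOSURE in p.read_text(errors="ignore").lower()
]
print(f"{len(ai_reports)} reports carry the Draft One disclosure")
```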

Agencies that do not require that language told us they could not identify which reports were written with Draft One. For example, one of those agencies and one of Axon's most promoted clients, the Lafayette Police Department in Indiana, told us: 

"Regarding the attached request, we do not have the ability to create a list of reports created through Draft One. They are not searchable. This request is now closed."

Meanwhile, in response to a similar public records request, the Palm Beach County Sheriff's Office, which does require a disclosure at the bottom of each report noting that it was written with AI, was able to isolate more than 3,000 Draft One reports generated between December 2024 and March 2025.

They told us: "We are able to do a keyword and a timeframe search. I used the words draft one and the system generated all the draft one reports for that timeframe."

We have requested further clarification from Axon, but they have yet to respond. 

However, as we learned from email exchanges between the Frederick Police Department in Colorado and Axon, Axon is tracking police use of the technology at a level that isn't available to the police department itself. 

In response to a request from Politico's Alfred Ng in August 2024 for Draft One-generated police reports, the police department was struggling to isolate those reports. 

An Axon representative responded: "Unfortunately, there’s no filter for DraftOne reports so you’d have to pull a User’s audit trail and look for Draft One entries. To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy."

But then, Axon followed up: "We track which reports use Draft One internally so I exported the data." Then, a few days later, Axon provided Frederick with some custom JSON code to extract the data in the future. 
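Axon has not published that code, so here is a hypothetical sketch of the same idea: filtering a JSON export of audit events down to Draft One entries. The file name and field names are invented for illustration.

```python
# Axon's custom extraction code is not public; this is a hypothetical
# sketch of the same idea with invented file and field names: filter a
# JSON export of audit events down to Draft One entries.
import json

with open("audit_export.json") as f:
    events = json.load(f)  # assume a list of event objects

draft_one_reports = {
    event["report_id"]
    for event in events
    if "draft one" in (event.get("event_type") or "").lower()
}
print(f"{len(draft_one_reports)} reports touched by Draft One")
```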


What is Being Done About Draft One

The California Assembly is currently considering S.B. 524, a bill that addresses transparency measures for AI-written police reports. The legislation would require disclosure whenever police use artificial intelligence to partially or fully write official reports, as well as “require the first draft created to be retained for as long as the final report is retained.” Because Draft One is designed not to retain the first or any previous drafts of a report, it cannot comply with this common-sense, first-step bill, and any law enforcement usage would be unlawful.

Axon markets Draft One as a solution to a problem police have been complaining about for at least a century: that they do too much paperwork. Or, at least, that they spend too much time doing paperwork. The current research on whether Draft One remedies this issue shows mixed results, with some agencies claiming it produces no real time savings and others extolling its virtues (although their data also shows that results vary even within a department).

In the justice system, police must prioritize accuracy over speed. Public safety and a trustworthy legal system demand quality over corner-cutting. Time saved should not be the only metric, or even the most important one. It's like evaluating a drive-through restaurant based only on how fast the food comes out, while deliberately concealing the ingredients and nutritional information and failing to inspect whether the kitchen is up to health and safety standards. 

Given how untested this technology is and how eager the company is to sell Draft One, many local lawmakers and prosecutors have taken it upon themselves to try to regulate the product’s use. Utah is currently considering a bill that would mandate disclosure for any police reports generated by AI, thus addressing one of the current major transparency issues: it’s nearly impossible to tell which finished reports started as an AI draft.

In King County, Washington, which includes Seattle, the prosecutor’s office has been clear in its instructions: police should not use AI to write police reports. Its memo says:

We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.

We urge other prosecutors to follow suit and demand that police in their jurisdiction not unleash this new, unaccountable, and intentionally opaque AI product. 

Conclusion

Police should not be using AI to write police reports. There are just too many unanswered questions about how AI would translate the audio of situations and whether police will actually edit those drafts, while simultaneously, there is no way for the public to reliably discern what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might compound and exacerbate existing problems or create new ones in an already unfair and untransparent criminal justice system. 

EFF will continue to research and advocate against the use of this technology, but for now the lesson is clear: anyone with control or influence over police departments, be they lawmakers or people in the criminal justice system, has a duty to be informed about the potential harms and challenges posed by AI-written police reports.

Data Brokers are Selling Your Flight Information to CBP and ICE

9 July 2025 at 19:06

For many years, data brokers have existed in the shadows, exploiting gaps in privacy laws to harvest our information—all for their own profit. They sell our precise movements without our knowledge or meaningful consent to a variety of private and state actors, including law enforcement agencies. And they show no sign of stopping.

This incentivizes other bad actors. If companies collect any kind of personal data and want to make a quick buck, there’s a data broker willing to buy it and sell it to the highest bidder–often law enforcement and intelligence agencies.

One recent investigation by 404 Media revealed that the Airlines Reporting Corporation (ARC), a data broker owned and operated by at least eight major U.S. airlines, including United Airlines and American Airlines, collected travelers’ domestic flight records and secretly sold access to U.S. Customs and Border Protection (CBP). Despite selling passengers’ names, full flight itineraries, and financial details, the data broker prevented U.S. border forces from revealing it as the origin of the information. So not only is the government doing an end run around the Fourth Amendment to get information for which it would otherwise need a warrant; it has also been trying to hide how it knows these things about us.

ARC’s Travel Intelligence Program (TIP) aggregates passenger data and contains more than one billion records spanning 39 months of past and future travel by both U.S. and non-U.S. citizens. CBP, which sits within the U.S. Department of Homeland Security (DHS), claims it needs this data to support local and state police keeping track of people of interest. But at a time of growing concerns about increased immigration enforcement at U.S. ports of entry, including unjustified searches, law enforcement officials will use this additional surveillance tool to expand the web of suspicion to even larger numbers of innocent travelers. 

More than 200 airlines settle tickets through ARC, with information on more than 54% of flights taken globally. ARC’s board of directors includes representatives from U.S. airlines like JetBlue and Delta, as well as international airlines like Lufthansa, Air France, and Air Canada. 

In selling law enforcement agencies bulk access to such sensitive information, these airlines—through their data broker—are putting their own profits over travelers' privacy. U.S. Immigration and Customs Enforcement (ICE) recently detailed its own purchase of personal data from ARC. In the current climate, this can have a detrimental impact on people’s lives. 

Movement unrestricted by governments is a hallmark of a free society. In our current moment, when the federal government is threatening legal consequences based on people’s national, religious, and political affiliations, having air travel in and out of the United States tracked by any ARC customer is a recipe for state retribution. 

Sadly, data brokers are doing even broader harm to our privacy. Sensitive location data is harvested from smartphones and sold to cops, internet backbone data is sold to federal counterintelligence agencies, and utility databases containing phone, water, and electricity records are shared with ICE officers. 

At a time when immigration authorities are eroding fundamental freedoms through increased—and arbitrary—actions at the U.S. border, this news further exacerbates concerns that creeping authoritarianism can be fueled by the extraction of our most personal data—all without our knowledge or consent.

The new revelations about ARC’s data sales to CBP and ICE are a fresh reminder of the need for “privacy first” legislation that imposes consent and minimization limits on corporate processing of our data. We also need to pass the Fourth Amendment Is Not For Sale Act to stop police from bypassing judicial review of their data seizures by purchasing data from brokers. And let’s enforce data broker registration laws.

EFF to US Court of Appeals: Protect Taxpayer Privacy

8 July 2025 at 15:10

EFF has filed an amicus brief in Trabajadores v. Bessent, a case concerning the Internal Revenue Service (IRS) sharing protected personal tax information with the Department of Homeland Security for the purposes of immigration enforcement. Our expertise in privacy and data sharing makes us the ideal organization to step in and inform the judge: government actions like this have real-world consequences. The IRS’s sharing, and especially bulk sharing, of data is improper and makes taxpayers vulnerable to inevitable mistakes. As a practical matter, sharing data that the IRS had previously claimed was protected undermines the trust that important civil institutions require in order to be effective.

You can read the entire brief here.

The brief makes two particular arguments. The first is that if the Tax Reform Act, the statute under which the IRS found the authority to share the data, is considered ambiguous, then it should be interpreted in light of the legislative intent and historical background, which disfavor disclosure. The brief reads,

Given the historical context, and decades of subsequent agency promises to protect taxpayer confidentiality and taxpayer reliance on those promises, the Administration’s abrupt decision to re-interpret §6103 to allow sharing with ICE whenever a potential “criminal proceeding” can be posited, is a textbook example of an arbitrary and capricious action even if the statute can be read to be ambiguous.

The other argument we make to the court is one on which data scientists agree: when you try to corroborate information between two databases in which records are only partially identifiable, mistakes happen. We argue:

Those errors result from such mundane issues as outdated information, data entry errors, and taxpayers or tax preparer submission of incorrect names or addresses. If public reports are correct, and officials intend to share information regarding 700,000 or even 7 million taxpayers, the errors will multiply, leading to the mistaken targeting, detention, deportation, and potentially even physical harm to regular taxpayers.
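A toy example makes the point. The sketch below, with entirely invented records, shows how matching on partial identifiers (here, a name and ZIP code) conflates distinct people; real-world linkage is far noisier.

```python
# A toy demonstration of record-linkage error: matching two databases
# on partial identifiers (name + ZIP) conflates distinct people.
# All records are invented.

irs_records = [
    {"name": "J. Garcia", "zip": "10025", "taxpayer_id": "A-1"},
    {"name": "J. Garcia", "zip": "10025", "taxpayer_id": "A-2"},  # different person
]
enforcement_targets = [{"name": "J. Garcia", "zip": "10025"}]

matches = [
    record
    for record in irs_records
    for target in enforcement_targets
    if record["name"] == target["name"] and record["zip"] == target["zip"]
]
print(matches)  # both taxpayers "match"; at least one is wrongly flagged
# Outdated addresses, data entry errors, and name variations make
# real-world linkage far noisier than this two-row example.
```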

Information silos in the government exist for a reason. Here, the silo was designed to protect individual privacy and prevent the executive abuse that can come with unfettered access to properly-collected information. The concern motivating Congress to pass the Tax Reform Act was the same one behind the Privacy Act of 1974 and the 1978 Right to Financial Privacy Act. These laws were part of a wave of reforms Congress considered necessary to address the misuse of tax data to spy on and harass political opponents, dissidents, civil rights activists, and anti-war protestors in the 1960s and early 1970s. Congress saw the need to ensure that data collected for one purpose is used only for that purpose, with very narrow exceptions, or else it is prone to abuse. Yet the IRS is currently sharing information to allow ICE to enforce immigration law.

Taxation in the United States operates through a very simple agreement: the government requires taxes from people working inside the United States in order to function. To get people to pay their taxes, including undocumented immigrants living and working in the United States, the IRS has long promised that the data it collects will not be used against a person for punitive reasons. This encourages people to pay taxes and alleviates concerns that might otherwise keep them from interacting with the government. The IRS’s reversal has greatly harmed that trust, with potentially far-reaching and negative ramifications, including decreased future tax revenue.

Consolidating government information so that the agencies responsible for healthcare, taxes, or financial support are linked to agencies that police, surveil, and fine people is a recipe for disaster. For that reason, EFF is proud to submit this amicus brief in Trabajadores v. Bessent in support of taxpayer privacy. 
