
How to Identify Automated License Plate Readers at the U.S.-Mexico Border

U.S. Customs and Border Protection (CBP), the Drug Enforcement Administration (DEA), and scores of state and local law enforcement agencies have installed a massive dragnet of automated license plate readers (ALPRs) in the U.S.-Mexico borderlands.

In many cases, the agencies have gone out of their way to disguise the cameras from public view. And the problem is only going to get worse: as recently as July 2025, CBP put out a solicitation to purchase 100 more covert trail cameras with license plate-capture ability. 

Last month, the Associated Press published an in-depth investigation into how agencies have deployed these systems and exploited this data to target drivers. But what do these cameras look like? Here's a guide to identifying ALPR systems when you're driving the open road along the border.

Special thanks to researcher Dugan Meyer and AZ Mirror's Jerod MacDonald-Evoy. All images by EFF and Meyer were taken within the last three years. 

ALPR at Checkpoints and Land Ports of Entry 

All land ports of entry have ALPR systems that scan every vehicle entering and exiting the country. They typically look like this:

License plate readers along the lanes leading into a border crossing

ALPR systems at the Eagle Pass International Bridge Port of Entry. Source: EFF

Most interior checkpoints, which sit anywhere from a few miles to more than 60 miles from the border, are also equipped with ALPR systems operated by CBP. However, the DEA operates a parallel system at most interior checkpoints in southern border states.

When it comes to checkpoints, here's the rule of thumb: If you're traveling away from the border, you are typically being captured by a CBP/Border Patrol system (Border Patrol is a sub-agency of CBP). If you're traveling toward the border, it is most likely a DEA system.

Here's a representative example of a CBP checkpoint camera system:

ALPR cameras next to white trailers along the lane into a checkpoint

ALPR system at the Border Patrol checkpoint near Uvalde, Texas. Source: EFF

At a typical port of entry or checkpoint, each vehicle lane will have an ALPR system. We've even seen Border Patrol checkpoints that were temporarily closed continue to funnel people through these ALPR lanes, even though there was no one on hand to vet drivers face-to-face. According to CBP's Privacy Impact Assessments (2017, 2020), CBP keeps this data for 15 years, but agents can generally only search the most recent five years' worth of data.

The scanners were previously made by a company called Perceptics, which was infamously hacked, leading to a breach of driver data. The systems have since been "modernized" (i.e., replaced) by SAIC.

Here's a close-up of the new systems:

Close up of a camera marked "Front."

Frontal ALPR camera at the checkpoint near Uvalde, Texas. Source: EFF

In 2024, the DEA announced plans to integrate port of entry ALPRs into its National License Plate Reader Program (NLPRP), which the agency says is a network of both DEA systems and external law enforcement ALPR systems that it uses to investigate crimes such as drug trafficking and bulk cash smuggling.

Again, if you're traveling toward the border and you pass a checkpoint, you're often captured by parallel DEA systems set up on the opposite side of the road. However, these systems have also been found installed on their own, away from checkpoints.

These are a major component of the DEA's NLPRP, which has a standard retention period of 90 days. This program dates back to at least 2010, according to records obtained by the ACLU. 

Here is a typical DEA system that you will find installed near existing Border Patrol checkpoints:

A series of cameras next to a trailer by the side of the road.

DEA ALPR set-up in southern Arizona. Source: EFF

These are typically made by a different vendor, Selex ES, whose products also carry the ELSAG and Leonardo brand names. Here is a close-up:

Close-up of an ALPR camera

Close-up of a DEA camera near the Tohono O'odham Nation in Arizona. Source: EFF

Covert ALPR

Along border highways, law enforcement agencies have disguised cameras in order to capture your movements as you drive.

The exact number of covert ALPRs at the border is unknown, but to date we have identified approximately 100 sites. We know CBP and DEA each operate covert ALPR systems, but it isn't always possible to know which agency operates any particular set-up. 

Another rule of thumb: if a covert ALPR has a Motorola Solutions camera (formerly Vigilant Solutions) inside, it's likely a CBP system. If it has a Selex ES camera inside, then it is likely a DEA camera. 

Here are examples of construction barrels with each kind of camera: 

A camera hidden inside an orange traffic barrel

A covert ALPR with a Motorola Solutions ALPR camera near Calexico, Calif. Source: EFF

These are typically seen along the roadside, often in sets of three, and almost always connected to some sort of solar panel. They are often placed behind existing barriers.

A camera hidden inside an orange traffic barrel

A covert ALPR with a Selex ES camera in southern Arizona. Source: EFF

The DEA models are likewise found by the roadside, but they can also be found inside or near checkpoints.

If you're curious (as we were), here's what they look like inside, courtesy of the US Patent and Trademark Office:

Patent drawings showing a traffic barrel and the camera inside it

Patent for portable covert license plate reader. Source: USPTO

In addition to orange construction barrels, agencies also conceal ALPRs in yellow sand barrels. For example, these can be found throughout southern Arizona, especially in the southeastern part of the state.

A camera hidden in a yellow sand barrel.

A covert ALPR system in Arizona. Source: EFF

ALPR Trailers

Sometimes a speed trailer or signage trailer isn't designed for safety so much as to conceal ALPR systems. Other times, ALPRs are attached to indistinct trailers with no discernible purpose that you'd hardly notice by the side of the road.

It's important to note that it's difficult to know who these belong to, since they often aren't marked. We know that all levels of government, even in the interior of the country, have purchased these setups.

Here are some of the different flavors of ALPR trailers:

A speed trailer capturing ALPR. Speed limit 45 sign.

An ALPR speed trailer in Texas. Source: EFF

A white flat trailer by the side of the road with camera portals on either end.

ALPR trailer in Southern California. Source: EFF

An orange trailer with an ALPR camera and a solar panel.

ALPR trailer in Southern California. Source: EFF

An orange trailer with ALPR cameras by the side of the road.

An ALPR unit in southern Arizona. Source: EFF

A trailer with a pole with mounted ALPR cameras in the desert.

ALPR unit in southern Arizona. Source: EFF

A trailer with a solar panel and an ALPR camera.

A Jenoptik Vector ALPR trailer in La Joya, Texas. Source: EFF

One particularly worrisome version of an ALPR trailer is the Jenoptik Vector: at least two jurisdictions along the border have equipped these trailers not only with ALPR, but with TraffiCatch technology that gathers Bluetooth and Wi-Fi identifiers. This means that in addition to capturing plates, these trailers can also log nearby electronics, such as phones, laptops, and even vehicle entertainment systems.

Stationary ALPR 

Stationary or fixed ALPR is one of the more traditional ways of installing these systems. The cameras are placed on existing utility poles or other infrastructure or on poles installed by the ALPR vendor. 

For example, here's a DEA system installed on a highway arch:

The back of a highway overpass sign with ALPR cameras.

The lower set of ALPR cameras belong to the DEA. Source: Dugan Meyer CC BY

A camera and solar panel attached to a streetlight pole.

ALPR camera in Arizona. Source: Dugan Meyer CC BY

Flock Safety

At the local level, thousands of cities around the United States have adopted fixed ALPR, with the company Flock Safety grabbing a huge chunk of the market over the last few years. County sheriffs and municipal police along the border have also embraced the trend, with many using funds earmarked for border security to purchase these systems. Flock allows these agencies to share data with one another and contribute their ALPR scans to a national pool. As part of a pilot program, Border Patrol had access to this ALPR data for most of 2025.

A typical Flock Safety setup involves attaching cameras and solar panels to poles. For example:

A red truck passed a pair of Flock Safety ALPR cameras on poles.

Flock Safety ALPR poles installed just outside the Tohono O'odham Nation in Arizona. Source: EFF

A black Flock Safety camera with a small solar panel

A close-up of a Flock Safety camera in Douglas, Arizona. Source: EFF

We've also seen these camera poles placed outside the Santa Teresa Border Patrol station in New Mexico.

Flock may now be the most common provider nationwide, but it isn't the only player in the field. DHS recently released a market survey of 16 different vendors providing similar technology.  

Mobile ALPR 

ALPR cameras can also be found attached to patrol cars. Here's an example of a Motorola Solutions ALPR attached to a Hidalgo County Constable vehicle in South Texas:

An officer stands beside patrol car. Red circle identifies mobile ALPR

Mobile ALPR on a Hidalgo County Constable vehicle. Source: Weslaco Police Department

These allow officers not only to capture ALPR data in real time as they drive, but also to receive an in-car alert when a scan matches a vehicle on a "hot list," the term for a list of plates that law enforcement has flagged for further investigation.

Here's another example: 

A masked police officer stands next to a patrol vehicle with two ALPR cameras.

Mobile ALPR in La Mesa, Calif. Source: La Mesa Police Department Facebook page

Identifying Other Technologies 

EFF has been documenting the wide variety of technologies deployed at the border, including surveillance towers, aerostats, and trail cameras. To learn more, download EFF's zine, "Surveillance Technology at the US-Mexico Border" and explore our map of border surveillance, which includes Google Street View links so you can see exactly how each installation looks on the ground. Currently we have mapped out most DEA and CBP checkpoint ALPR setups, with covert cameras planned for addition in the near future.

  •  

Rights Organizations Demand Halt to Mobile Fortify, ICE's Handheld Face Recognition Program

Mobile Fortify, the new app used by Immigration and Customs Enforcement (ICE) to use face recognition technology (FRT) to identify people during street encounters, is an affront to the rights and dignity of migrants and U.S. citizens alike. That's why a coalition of privacy, civil liberties and civil rights organizations are demanding the Department of Homeland Security (DHS) shut down the use of Mobile Fortify, release the agency's privacy analyses of the app, and clarify the agency's policy on face recognition. 

As the organizations, including EFF, Asian Americans Advancing Justice and the Project on Government Oversight, write in a letter sent by EPIC:

ICE’s reckless field practices compound the harm done by its use of facial recognition. ICE does not allow people to opt-out of being scanned, and ICE agents apparently have the discretion to use a facial recognition match as a definitive determination of a person’s immigration status even in the face of contrary evidence.  Using face identification as a definitive determination of immigration status is immensely disturbing, and ICE’s cavalier use of facial recognition will undoubtedly lead to wrongful detentions, deportations, or worse.  Indeed, there is already at least one reported incident of ICE mistakenly determining a U.S. citizen “could be deported based on biometric confirmation of his identity.”

As if this dangerous use of nonconsensual face recognition isn't bad enough, Mobile Fortify also queries a wide variety of government databases. Already there have been reports that federal officers may be using this FRT to target protesters engaging in First Amendment-protected activities. Yet ICE concluded it did not need to conduct a new Privacy Impact Assessment, which is standard practice for proposed government technologies that collect people's data. 

While Mobile Fortify is the latest iteration of ICE’s mobile FRT, EFF has been tracking this type of technology for more than a decade. In 2013, we identified how a San Diego agency had distributed face recognition-equipped phones to law enforcement agencies across the region, including federal immigration officers. In 2019, EFF helped pass a law temporarily banning collecting biometric data with mobile devices, resulting in the program's cessation.

We fought against handheld FRT then, and we will fight it again today. 

  •  

How Cops Are Using Flock Safety's ALPR Network to Surveil Protesters and Activists

It's no secret that 2025 has given Americans plenty to protest about. But as news cameras showed protesters filling streets of cities across the country, law enforcement officers—including U.S. Border Patrol agents—were quietly watching those same streets through different lenses: Flock Safety automated license plate readers (ALPRs) that tracked every passing car. 

Through an analysis of 10 months of nationwide searches on Flock Safety's servers, we discovered that more than 50 federal, state, and local agencies ran hundreds of searches through Flock's national network of surveillance data in connection with protest activity. In some cases, law enforcement specifically targeted known activist groups, demonstrating how mass surveillance technology increasingly threatens our freedom to demonstrate. 

Flock Safety provides ALPR technology to thousands of law enforcement agencies. The company installs cameras throughout these agencies' jurisdictions, and these cameras photograph every car that passes, documenting the license plate, color, make, model, and other distinguishing characteristics. This data is paired with time and location, and uploaded to a massive searchable database. Flock Safety encourages agencies to share the data they collect broadly with other agencies across the country. It is common for an agency to search thousands of networks nationwide even when they don't have reason to believe a targeted vehicle left the region.

Via public records requests, EFF obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025. The data shows that agencies logged hundreds of searches related to the 50501 protests in February, the Hands Off protests in April, the No Kings protests in June and October, and other protests in between. 
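Audit logs like these can be screened for protest-related activity with a simple keyword filter over the free-text "reason" field. Below is a minimal sketch in Python; the record layout, agency names, and keyword list are illustrative assumptions, not the actual schema of Flock's exports or of EFF's analysis.

```python
from collections import Counter

# Illustrative keywords suggesting a protest-related search; matched case-insensitively.
PROTEST_TERMS = ("protest", "no kings", "rally", "march", "demonstration")

def protest_searches(rows):
    """Yield audit-log rows whose free-text 'reason' mentions a protest term."""
    for row in rows:
        reason = row.get("reason", "").lower()
        if any(term in reason for term in PROTEST_TERMS):
            yield row

# Inline records standing in for a CSV obtained via a records request (hypothetical).
sample = [
    {"agency": "Tulsa PD", "reason": "protest"},
    {"agency": "Beaumont PD", "reason": "KINGS DAY PROTEST"},
    {"agency": "Anytown PD", "reason": "stolen vehicle"},
]
hits = list(protest_searches(sample))
by_agency = Counter(row["agency"] for row in hits)
```

A real analysis would load the agency's export with `csv.DictReader` and normalize reason strings first, and ambiguous matches (e.g., "march" appearing as a month) would still need manual review.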

The Tulsa Police Department in Oklahoma was one of the most consistent users of Flock Safety's ALPR system for investigating protests, logging at least 38 such searches. This included running searches that corresponded to a protest against deportation raids in February, a protest at Tulsa City Hall in support of pro-Palestinian activist Mahmoud Khalil in March, and the No Kings protest in June. During the most recent No Kings protests in mid-October, agencies such as the Lisle Police Department in Illinois, the Oro Valley Police Department in Arizona, and the Putnam County (Tenn.) Sheriff's Office all ran protest-related searches. 

While EFF and other civil liberties groups argue the law should require a search warrant for such searches, police are simply prompted to enter text into a "reason" field in the Flock Safety system. Usually this is only a few words, or even just one.

In these cases, that word was often just “protest.” 

Crime does sometimes occur at protests, whether that's property damage, pick-pocketing, or clashes between groups on opposite sides of a protest. Some of these searches may have been tied to an actual crime that occurred, even though in most cases officers did not articulate a criminal offense when running the search. But the truth is, the only reason an officer is able to even search for a suspect at a protest is because ALPRs collected data on every single person who attended the protest. 

Search and Dissent 

2025 was an unprecedented year of street action. In June and again in October, thousands across the country mobilized under the banner of the “No Kings” movement—marches against government overreach, surveillance, and corporate power. By some estimates, the October demonstrations ranked among the largest single-day protests in U.S. history, filling the streets from Washington, D.C., to Portland, Ore.

EFF identified 19 agencies that logged dozens of searches associated with the No Kings protests in June and October 2025. In some cases the term "No Kings" was used explicitly, while in others the term "protest" was used but the searches coincided with the massive demonstrations.

Law Enforcement Agencies that Ran Searches Corresponding with "No Kings" Rallies

  • Anaheim Police Department, Calif.
  • Arizona Department of Public Safety
  • Beaumont Police Department, Texas
  • Charleston Police Department, SC
  • Flagler County Sheriff's Office, Fla.
  • Georgia State Patrol
  • Lisle Police Department, Ill.
  • Little Rock Police Department, Ark.
  • Marion Police Department, Ohio
  • Morristown Police Department, Tenn.
  • Oro Valley Police Department, Ariz.
  • Putnam County Sheriff's Office, Tenn.
  • Richmond Police Department, Va.
  • Riverside County Sheriff's Office, Calif.
  • Salinas Police Department, Calif.
  • San Bernardino County Sheriff's Office, Calif.
  • Spartanburg Police Department, SC
  • Tempe Police Department, Ariz.
  • Tulsa Police Department, Okla.
  • US Border Patrol

For example: 

  • In Washington state, the Spokane County Sheriff's Office listed "no kings" as the reason for three searches on June 15, 2025. The agency queried 95 camera networks, looking for vehicles matching the description of "work van," "bus" or "box truck."
  • In Texas, the Beaumont Police Department ran six searches related to two vehicles on June 14, 2025, listing "KINGS DAY PROTEST" as the reason. The queries reached across 1,774 networks. 
  • In California, the San Bernardino County Sheriff's Office ran a single search for a vehicle across 711 networks, logging "no king" as the reason. 
  • In Arizona, the Tempe Police Department made three searches for "ATL No Kings Protest" on June 15, 2025, searching through 425 networks. "ATL" is police code for "attempt to locate." The agency appears not to have been looking for a particular plate, but for any red vehicle on the road during a certain time window.

But the No Kings protests weren't the only demonstrations drawing law enforcement's digital dragnet in 2025. 

For example:

  • In Nevada's state capital, the Carson City Sheriff's Office ran three searches that correspond to the February 50501 Protests against DOGE and the Trump administration. The agency searched for two vehicles across 178 networks with "protest" as the reason.
  • In Florida, the Seminole County Sheriff's Office logged "protest" for five searches that correspond to a local May Day rally.
  • In Alabama, the Homewood Police Department logged four searches in early July 2025 for three vehicles with "PROTEST CASE" and "PROTEST INV." in the reason field. The searches, which probed 1,308 networks, correspond to protests against the police shooting of Jabari Peoples.
  • In Texas, the Lubbock Police Department ran two searches on March 15 for a Tennessee license plate, corresponding to a rally to highlight the mental health impact of immigration policies. The searches hit 5,966 networks, with the logged reason "protest veh."
  • In Michigan, the Grand Rapids Police Department ran five searches that corresponded with the Stand Up and Fight Back Rally in February. The searches hit roughly 650 networks, with the reason logged as "Protest."

Some agencies have adopted policies that prohibit using ALPRs for monitoring activities protected by the First Amendment. Yet many officers probed the nationwide network with terms like "protest" without articulating an actual crime under investigation.

In a few cases, police were using Flock’s ALPR network to investigate threats made against attendees or incidents where motorists opposed to the protests drove their vehicle into crowds. For example, throughout June 2025, an Arizona Department of Public Safety officer logged three searches for “no kings rock threat,” and a Wichita (Kan.) Police Department officer logged 22 searches for various license plates under the reason “Crime Stoppers Tip of causing harm during protests.”

Even when law enforcement is specifically looking for vehicles engaged in potentially criminal behavior, such as threatening protesters, it cannot be ignored that mass surveillance systems work by collecting data on everyone driving to or near a protest, not just those under suspicion.

Border Patrol's Expanding Reach 

As U.S. Border Patrol (USBP), ICE, and other federal agencies tasked with immigration enforcement have massively expanded operations into major cities, advocates for immigrants have responded through organized rallies, rapid-response confrontations, and extended presences at federal facilities. 

USBP has made extensive use of Flock Safety's system for immigration enforcement, but also to target those who object to its tactics. In June, a few days after the No Kings Protest, USBP ran three searches for a vehicle using the descriptor “Portland Riots.” 

USBP also used the Flock Safety network to investigate a motorist who had “extended his middle finger” at Border Patrol vehicles that were transporting detainees. The motorist then allegedly drove in front of one of the vehicles and slowed down, forcing the Border Patrol vehicle to brake hard. An officer ran seven searches for his plate, citing "assault on agent" and "18 usc 111," the federal criminal statute for assaulting, resisting or impeding a federal officer. The individual was charged in federal court in early August. 

USBP had access to the Flock system during a trial period in the first half of 2025, but the company says it has since paused the agency's access to the system. However, Border Patrol and other federal immigration authorities have been able to access the system’s data through local agencies who have run searches on their behalf or even lent them logins.

Targeting Animal Rights Activists

Law enforcement's use of Flock's ALPR network to surveil protesters isn't limited to large-scale political demonstrations. Three agencies also used the system dozens of times to specifically target activists from Direct Action Everywhere (DxE), an animal-rights organization known for using civil disobedience tactics to expose conditions at factory farms.

Delaware State Police queried the Flock national network nine times in March 2025 related to DxE actions, logging reasons such as "DxE Protest Suspect Vehicle." DxE advocates told EFF that these searches correspond to an investigation the organization undertook of a Mountaire Farms facility. 

Additionally, the California Highway Patrol logged dozens of searches related to a "DXE Operation" throughout the day on May 27, 2025. The organization says this corresponds with an annual convening in California that typically ends in a direct action. Participants leave the event early in the morning, then drive across the state to a predetermined but previously undisclosed protest site. Also in May, the Merced County Sheriff's Office in California logged two searches related to "DXE activity." 

As an organization engaged in direct activism, DxE has faced criminal prosecution for its activities, so the organization told EFF it was not surprised to learn it is under scrutiny from law enforcement, particularly considering how industrial farmers have collected and distributed their own intelligence to police.

The targeting of DxE activists reveals how ALPR surveillance extends beyond conventional and large-scale political protests to target groups engaged in activism that challenges powerful industries. For animal-rights activists, the knowledge that their vehicles are being tracked through a national surveillance network undeniably creates a chilling effect on their ability to organize and demonstrate.

Fighting Back Against ALPR 

Two Flock Safety cameras on a pole

ALPR systems are designed to capture information on every vehicle that passes within view. That means they don't just capture data on "criminals" but on everyone, all the time, and that includes people engaged in their First Amendment right to publicly dissent. Police are sitting on massive troves of data that can reveal who attended a protest, and this data shows they are not afraid to use it.

Our analysis only includes data where agencies explicitly mentioned protests or related terms in the "reason" field when documenting their search. It's likely that scores more searches were conducted under less obvious pretexts and search reasons. According to our analysis, approximately 20 percent of all searches we reviewed listed vague language like "investigation," "suspect," and "query" in the reason field. Those terms could well be cover for spying on a protest, an abortion prosecution, or an officer stalking a spouse, and no one would be the wiser, including the agencies whose data was searched. Flock has said it will now require officers to select a specific crime under investigation, but that option can and will also be used to obfuscate dubious searches.
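A figure like the share of searches with only vague reasons comes from classifying reason strings against a list of boilerplate terms. Here is a minimal sketch of that kind of tally; the term list and sample reasons are illustrative assumptions, not EFF's actual methodology.

```python
# Reason strings considered too vague to identify any crime under investigation
# (an illustrative list, not the one used in the analysis).
VAGUE = {"investigation", "suspect", "query"}

def vague_share(reasons):
    """Return the fraction of search reasons that are only vague boilerplate."""
    if not reasons:
        return 0.0
    vague = sum(1 for r in reasons if r.strip().lower() in VAGUE)
    return vague / len(reasons)

# Five hypothetical logged reasons, three of them vague.
reasons = ["investigation", "stolen vehicle", "Query", "protest", "SUSPECT"]
share = vague_share(reasons)
```

Exact string matching keeps the tally conservative: a reason like "protest veh" would not be counted as vague, only as protest-related.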

For protesters, this data should serve as confirmation that ALPR surveillance has been and will be used to target activities protected by the First Amendment. Depending on your threat model, this means you should think carefully about how you arrive at protests, and explore options such as biking, walking, carpooling, taking public transportation, or simply parking a little farther away from the action. Our Surveillance Self-Defense project has more information on steps you can take to protect your privacy when traveling to and attending a protest.

For local officials, this should serve as another example of how systems marketed as protecting your community may actually threaten the values your communities hold most dear. The best way to protect people is to shut down these camera networks.  

Everyone should have the right to speak up against injustice without ending up in a database. 

  •  

License Plate Surveillance Logs Reveal Racist Policing Against Romani People

More than 80 law enforcement agencies across the United States have used language perpetuating harmful stereotypes against Romani people when searching the nationwide Flock Safety automated license plate reader (ALPR) network, according to audit logs obtained and analyzed by the Electronic Frontier Foundation. 

When police run a search through the Flock Safety network, which links thousands of ALPR systems, they are prompted to leave a reason and/or case number for the search. Between June 2024 and October 2025, cops performed hundreds of searches for license plates using terms such as "roma" and "g*psy," in many instances without any mention of a suspected crime. Other uses include "g*psy vehicle," "g*psy group," "possible g*psy," "roma traveler" and "g*psy ruse," perpetuating systemic harm by demeaning individuals based on their race or ethnicity.

These queries were run through thousands of police departments' systems—and it appears that none of these agencies flagged the searches as inappropriate. 

These searches are, by definition, racist. 

Word Choices and Flock Searches 

We are using the terms "Roma" and “Romani people” as umbrella terms, recognizing that they represent different but related groups. Since 2020, the U.S. federal government has officially recognized "Anti-Roma Racism" as including behaviors such as "stereotyping Roma as persons who engage in criminal behavior" and using the slur "g*psy." According to the U.S. Department of State, this language “leads to the treatment of Roma as an alleged alien group and associates them with a series of pejorative stereotypes and distorted images that represent a specific form of racism.” 

Nevertheless, police officers have run hundreds of searches for license plates using the terms "roma" and "g*psy." (Unlike the police queries we've uncovered, we substitute an asterisk for the Y to avoid repeating this racist slur.) In many cases, these terms have been used on their own, with no mention of a crime. In other cases, the terms have been used in contexts like "g*psy scam" and "roma burglary," when ethnicity should have no relevance to how a crime is investigated or prosecuted.

A “g*psy scam” and “roma burglary” do not exist in criminal law separate from any other type of fraud or burglary. Several agencies contacted by EFF have since acknowledged the inappropriate use and expressed efforts to address the issue internally. 

"The use of the term does not reflect the values or expected practices of our department," a representative of the Palos Heights (IL) Police Department wrote to EFF after being confronted with two dozen searches involving the term "g*psy." "We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language."

Of course, the broader issue is that allowing "g*psy" or "Roma" as a reason for a search isn't just offensive, it implies the criminalization of an ethnic group. In fact, the Grand Prairie Police Department in Texas searched for "g*psy" six times while using Flock's "Convoy" feature, which allows an agency to identify vehicles traveling together—in essence targeting an entire traveling community of Roma without specifying a crime.

At the bottom of this post is a list of agencies and the terms they used when searching the Flock system. 

Anti-Roma Racism in an Age of Surveillance 

Racism against Romani people has been a problem for centuries, with one of its most horrific manifestations coming during the Holocaust, when the Third Reich and its allies perpetrated genocide by murdering hundreds of thousands of Romani people and sterilizing thousands more. Despite efforts by the UN and EU to combat anti-Roma discrimination, this form of racism persists. As scholars Margareta Matache and Mary T. Bassett explain, it is perpetuated by modern American policing practices:

In recent years, police departments have set up task forces specialised in “G*psy crimes”, appointed “G*psy crime” detectives, and organised police training courses on “G*psy criminality”. The National Association of Bunco Investigators (NABI), an organisation of law enforcement professionals focusing on “non-traditional organised crime”, has even created a database of individuals arrested or suspected of criminal activity, which clearly marked those who were Roma.

Thus, it is no surprise that a 2020 Harvard University survey of Romani Americans found that 4 out of 10 respondents reported being subjected to racial profiling by police. This demonstrates the ongoing challenges they face due to systemic racism and biased policing. 

Notably, many police agencies using surveillance technologies like ALPRs have adopted some sort of basic policy against biased policing or the use of these systems to target people based on race or ethnicity. But even when such policies are in place, an agency’s failure to enforce them allows these discriminatory practices to persist. These searches were also run through the systems of thousands of other police departments that may have their own policies and state laws that prohibit bias-based policing—yet none of those agencies appeared to have flagged the searches as inappropriate. 

The Flock search data in question here shows that surveillance technology exacerbates racism, and even well-meaning policies to address bias can quickly fall apart without proper oversight and accountability. 
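The failure described above is striking because even a rudimentary automated review of search reasons would have surfaced these queries. The sketch below illustrates the idea with Python's standard library; the log format, column names, and flagged-term list are invented for illustration and do not reflect Flock's actual schema.

```python
import csv
import io

# Terms whose presence in a search "reason" should trigger human review.
# Illustrative only; a real reviewer would need a broader list and human
# judgment, since e.g. "roma" also appears in unrelated words.
FLAGGED_TERMS = {"roma", "traveller", "traveler"}

def flag_searches(log_csv: str) -> list[dict]:
    """Return audit-log rows whose stated reason contains a flagged term."""
    hits = []
    for row in csv.DictReader(io.StringIO(log_csv)):
        reason = row.get("reason", "").lower()
        if any(term in reason for term in FLAGGED_TERMS):
            hits.append(row)
    return hits

# Invented sample rows in the hypothetical log format.
sample_log = """agency,reason,networks
Example PD,roma crew burglaries,1420
Example SO,stolen vehicle 24-00123,450
"""
```

Running `flag_searches(sample_log)` flags the first row and passes the second, which names a crime and case number. The point is not that keyword filters are sufficient, but that searches stating an ethnicity in plain text could have been caught by even the crudest oversight tooling.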

Cops In Their Own Words

EFF reached out to a sample of the police departments that ran these searches. Here are five representative responses we received from police departments in Illinois, California, and Virginia. They do not inspire confidence.

1. Lake County Sheriff's Office, IL 

A screen grab of three searches

In June 2025, the Lake County Sheriff's Office ran three searches for a dark-colored pickup truck, using the reason "G*PSY Scam." The searches covered 1,233 networks, representing 14,467 different ALPR devices. 

In response to EFF, a sheriff's representative wrote via email:

“Thank you for reaching out and for bringing this to our attention.  We certainly understand your concern regarding the use of that terminology, which we do not condone or support, and we want to assure you that we are looking into the matter.

Any sort of discriminatory practice is strictly prohibited at our organization. If you have the time to take a look at our commitment to the community and our strong relationship with the community, I firmly believe you will see discrimination is not tolerated and is quite frankly repudiated by those serving in our organization. 

We appreciate you bringing this to our attention so we can look further into this and address it.”

2. Sacramento Police Department, CA

A screen grab of three searches

In May 2025, the Sacramento Police Department ran six searches using the term "g*psy." The searches covered 468 networks, representing 12,885 different ALPR devices. 

In response to EFF, a police representative wrote:

“Thank you again for reaching out. We looked into the searches you mentioned and were able to confirm the entries. We’ve since reminded the team to be mindful about how they document investigative reasons. The entry reflected an investigative lead, not a disparaging reference. 

We appreciate the chance to clarify.”

3. Palos Heights Police Department, IL

A screen grab of three searches

In September 2024, the Palos Heights Police Department ran more than two dozen searches using terms such as "g*psy vehicle," "g*psy scam" and "g*psy concrete vehicle." Most searches hit roughly 1,000 networks. 

In response to EFF, a police representative said the searches were related to a single criminal investigation into a vehicle involved in a "suspicious circumstance/fraudulent contracting incident" and were "not indicative of a general search based on racial or ethnic profiling." However, the agency acknowledged the language was inappropriate: 

“The use of the term does not reflect the values or expected practices of our department. We do not condone the use of outdated or offensive terminology, and we will take this inquiry as an opportunity to educate those who are unaware of the negative connotation and to ensure that investigative notations and search reasons are documented in a manner that is accurate, professional, and free of potentially harmful language.

We appreciate your outreach on this matter and the opportunity to provide clarification.”

4. Irvine Police Department, CA

A screen grab of three searches

In February and May 2025, the Irvine Police Department ran eight searches using the term "roma" in the reason field. The searches covered 1,420 networks, representing 29,364 different ALPR devices. 

In a call with EFF, an IPD representative explained that the cases were related to a series of organized thefts. However, they acknowledged the issue, saying, "I think it's an opportunity for our agency to look at those entries and to use a case number or use a different term." 

5. Fairfax County Police Department, VA

A screen grab of three searches

Between December 2024 and April 2025, the Fairfax County Police Department ran more than 150 searches involving terms such as "g*psy case" and "roma crew burglaries." Fairfax County PD continued to defend its use of this language.

In response to EFF, a police representative wrote:

“Thank you for your inquiry. When conducting searches in investigative databases, our detectives must use the exact case identifiers, terms, or names connected to a criminal investigation in order to properly retrieve information. These entries reflect terminology already tied to specific cases and investigative files from other agencies, not a bias or judgment about any group of people. The use of such identifiers does not reflect bias or discrimination and is not inconsistent with our Bias-Based Policing policy within our Human Relations General Order.”

A National Trend

Roma individuals and families are not the only ones being systematically and discriminatorily targeted by ALPR surveillance technologies. For example, Flock audit logs show agencies ran 400 more searches using terms targeting Traveller communities more generally, with a specific focus on Irish Travellers, often without any mention of a crime. 

Across the country, these tools are enabling and amplifying racial profiling by embedding longstanding policing biases into surveillance technologies. For example, data from Oak Park, IL, show that 84% of drivers stopped in Flock-related traffic incidents were Black—despite Black people making up only 19% of the local population. ALPR systems are far from being neutral tools for public safety and are increasingly being used to fuel discriminatory policing practices against historically marginalized people. 

The racially coded language in Flock's logs mirrors long-standing patterns of discriminatory policing. Terms like "furtive movements," "suspicious behavior," and "high crime area" have always been cited by police to try to justify stops and searches of Black, Latine, and Native communities. These phrases might not appear in official logs because they're embedded earlier in enforcement—in the traffic stop without clear cause, the undocumented stop-and-frisk, the intelligence bulletin flagging entire neighborhoods as suspect. They function invisibly until a body-worn camera, court filing, or audit brings them to light. Flock's network didn’t create racial profiling; it industrialized it, turning deeply encoded and vague language into scalable surveillance that can search thousands of cameras across state lines. 

Two Flock safety cameras at 90 degrees on a pole with a solar panel

The Path Forward

U.S. Sen. Ron Wyden, D-OR, recently recommended that local governments reevaluate their decisions to install Flock Safety in their communities. We agree, but we also understand that sometimes elected officials need to see the abuse with their own eyes first. 

We know which agencies ran these racist searches, and they should be held accountable. But we also know that the vast majority of Flock Safety's clients—thousands of police and sheriffs—also allowed those racist searches to run through their Flock Safety systems unchallenged. 

Elected officials must act decisively to address the racist policing enabled by Flock's infrastructure. First, they should demand a complete audit of all ALPR searches conducted in their jurisdiction and a review of search logs to determine (a) whether their police agencies participated in discriminatory policing and (b) what safeguards, if any, exist to prevent such abuse. Second, officials should institute immediate restrictions on data-sharing through Flock's nationwide network. As demonstrated by California law, for example, police agencies should not be able to share their ALPR data with federal authorities or out-of-state agencies, thus eliminating a vehicle for discriminatory searches spreading across state lines.

Ultimately, elected officials must terminate Flock Safety contracts entirely. The evidence is now clear: audit logs and internal policies alone cannot prevent a surveillance system from becoming a tool for racist policing. The fundamental architecture of Flock—thousands of cameras feeding into a nationwide searchable network—makes discrimination inevitable when enforcement mechanisms fail.

As Sen. Wyden astutely explained, "local elected officials can best protect their constituents from the inevitable abuses of Flock cameras by removing Flock from their communities.”

Table Overview and Notes

The following table compiles terms used by agencies to describe the reasons for searching the Flock Safety ALPR database. In a small number of cases, we removed additional information such as case numbers, specific incident details, and officers' names that were present in the reason field. 

We removed one agency from the list due to the agency indicating that the word was a person's name and not a reference to Romani people. 

In general, we did not include searches that used the term "Romanian," although many of those may also be indicative of anti-Roma bias. We also did not include uses of "traveler" or “Traveller” when it did not include a clear ethnic modifier; however, we believe many of those searches are likely relevant.  

A text-based version of the spreadsheet is available here

November 12, 2025 update: Due to a clerical error, a term was misattributed to the Eugene Police Department in Oregon. We regret the error. 


Flock Safety and Texas Sheriff Claimed License Plate Search Was for a Missing Person. It Was an Abortion Investigation.

New documents and court records obtained by EFF show that Texas deputies queried Flock Safety's surveillance data in an abortion investigation, contradicting the narrative promoted by the company and the Johnson County Sheriff that the woman was “being searched for as a missing person,” and that “it was about her safety.” 

The new information shows that deputies had initiated a "death investigation" of a "non-viable fetus," logged evidence of a woman’s self-managed abortion, and consulted prosecutors about possibly charging her. 

Johnson County Sheriff Adam King repeatedly denied the automated license plate reader (ALPR) search was related to enforcing Texas's abortion ban, and Flock Safety called media accounts "false," "misleading" and "clickbait." However, according to a sworn affidavit by the lead detective, the case was in fact a death investigation in response to a report of an abortion, and deputies collected documentation of the abortion from the "reporting person," her alleged romantic partner. The death investigation remained open for weeks, with detectives interviewing the woman and reviewing her text messages about the abortion. 

The documents show that the Johnson County District Attorney's Office informed deputies that "the State could not statutorily charge [her] for taking the pill to cause the abortion or miscarriage of the non-viable fetus."

An excerpt from the police report, in which the detective talks about receiving evidence and calling the District Attorney's Office

An excerpt from the JCSO detective's sworn affidavit.

The records include previously unreported details about the case that shocked public officials and reproductive justice advocates across the country when it was first reported by 404 Media in May. The case serves as a clear warning sign that when data from ALPRs is shared across state lines, it can put people at risk, including abortion seekers. And, in this case, the use may have run afoul of laws in Washington and Illinois.

A False Narrative Emerges

Last May, 404 Media obtained data revealing the Johnson County Sheriff’s Office conducted a nationwide search of more than 83,000 Flock ALPR cameras, giving the reason in the search log: “had an abortion, search for female.” Both the Sheriff's Office and Flock Safety have attempted to downplay the search as akin to a search for a missing person, claiming deputies were only looking for the woman to “check on her welfare” and that officers found a large amount of blood at the scene – a claim now contradicted by the responding investigator’s affidavit. Flock Safety went so far as to assert that journalists and advocates covering the story intentionally misrepresented the facts, describing it as "misreporting" and "clickbait-driven." 

As Flock wrote of EFF's previous commentary on this case (bold in original statement): 

Earlier this month, there was purposefully misleading reporting that a Texas police officer with the Johnson County Sheriff’s Office used LPR “to target people seeking reproductive healthcare.” This organization is actively perpetuating narratives that have been proven false, even after the record has been corrected.

According to the Sheriff in Johnson County himself, this claim is unequivocally false.

… No charges were ever filed against the woman and she was never under criminal investigation by Johnson County. She was being searched for as a missing person, not as a suspect of a crime.

That sheriff has since been arrested and indicted on felony counts in an unrelated sexual harassment and whistleblower retaliation case. He has also been charged with aggravated perjury for allegedly lying to a grand jury. EFF filed public records requests with Johnson County to obtain a more definitive account of events.

The newly released incident report and affidavit unequivocally describe the case as a "death investigation" of a "non-viable fetus." These documents also undermine the claim that the ALPR search was in response to a medical emergency, since, in fact, the abortion had occurred more than two weeks before deputies were called to investigate. 

In recent years, anti-abortion advocates and prosecutors have increasingly attempted to use “fetal homicide” and “wrongful death” statutes – originally intended to protect pregnant people from violence – to criminalize abortion and pregnancy loss. These laws, which exist in dozens of states, establish legal personhood of fetuses and can be weaponized against people who end their own pregnancies or experience a miscarriage. 

In fact, a new report from Pregnancy Justice found that in just the first two years since the Supreme Court’s decision in Dobbs, prosecutors initiated at least 412 cases charging pregnant people with crimes related to pregnancy, pregnancy loss, or birth–most under child neglect, endangerment, or abuse laws that were never intended to target pregnant people. Nine cases included allegations around individuals’ abortions, such as possession of abortion medication or attempts to obtain an abortion–instances just like this one. The report also highlights how, in many instances, prosecutors use tangentially related criminal charges to punish people for abortion, even when abortion itself is not illegal.

By framing their investigation of a self-administered abortion as a “death investigation” of a “non-viable fetus,” Texas law enforcement was signaling their intent to treat the woman’s self-managed abortion as a potential homicide, even though Texas law does not allow criminal charges to be brought against an individual for self-managing their own abortion. 

The Investigator's Sworn Account

Over two days in April, the woman went through the process of taking medication to induce an abortion. Two weeks later, her partner–who would later be charged with domestic violence against her–reported her to the sheriff's office. 

The documents confirm that the woman was not present at the home when the deputies “responded to the death (Non-viable fetus).” As part of the investigation, officers collected evidence that the man had assembled of the self-managed abortion, including photographs, the FedEx envelope the medication arrived in, and the instructions for self-administering the medication. 

Another Johnson County official ran two searches through the ALPR database with the note "had an abortion, search for female," according to Flock Safety search logs obtained by EFF. The first search, which has not been previously reported, probed 1,295 Flock Safety networks–composed of 17,684 different cameras–going back one week. The second search, which was originally exposed by 404 Media, was expanded to a full month of data across 6,809 networks, including 83,345 cameras. Both searches listed the same case number that appears on the death investigation/incident report obtained by EFF. 

After collecting the evidence from the woman’s partner, the investigators say they consulted the district attorney’s office, only to be told they could not press charges against the woman. 

An excerpt from the detective's affidavit about investigating the abortion

An excerpt from the JCSO detective's sworn affidavit.

Nevertheless, when the subject showed up at the Sheriff’s office a week later, officers were under the impression that she came to “to tell her side of the story about the non-viable fetus.” They interviewed her, inspected text messages about the abortion on her phone, and watched her write a timeline of events. 

Only after all that did they learn that she actually wanted to report a violent assault by her partner–the same individual who had called the police to report her abortion. She alleged that less than an hour after the abortion, he choked her, put a gun to her head, and made her beg for her life. The man was ultimately charged in connection with the assault, and the case is ongoing. 

This documented account runs completely counter to what law enforcement and Flock have said publicly about the case. 

Johnson County Sheriff Adam King told 404 media: "Her family was worried that she was going to bleed to death, and we were trying to find her to get her to a hospital.” He later told the Dallas Morning News: “We were just trying to check on her welfare and get her to the doctor if needed, or to the hospital."

The account by the detective on the scene makes no mention of concerned family members or a medical investigator. To the contrary, the affidavit says that they questioned the man as to why he "waited so long to report the incident," and he responded that he needed to "process the event and call his family attorney." The ALPR search was recorded 2.5 hours after the initial call came in, as documented in the investigation report.

The Desk Sergeant's Report, One Month Later

EFF obtained a separate "case supplemental report" written by the sergeant who says he ran the May 9 ALPR searches. 

The sergeant was not present at the scene, and his account was written belatedly on June 5, almost a month after the incident and nearly a week after 404 Media had already published the sheriff’s alternative account of the Flock Safety search, kicking off a national controversy. The sheriff's office provided this sergeant's report to Dallas Morning News. 

In the report, the sergeant claims that the officers on the ground asked him to start "looking up" the woman due to there being "a large amount of blood" found at the residence–an unsubstantiated claim that is in conflict with the lead investigator’s affidavit. The sergeant repeatedly expresses that the situation was "not making sense." He claims he was worried that the partner had hurt the woman and her children, so "to check their welfare," he used TransUnion's TLO commercial investigative database system to look up her address. Once he identified her vehicle, he ran the plate through the Flock database, returning hits in Dallas.

A data table showing the log of searches

Two abortion-related searches in the JCSO's Flock Safety ALPR audit log

The sergeant's report, filed after the case attracted media attention, notably omits any mention of the abortion at the center of the investigation, although it does note that the caller claimed to have found a fetus. The report does not explain, or even address, why the sergeant used the phrase "had an abortion, search for female” as the official reason for the ALPR searches in the audit log. 

It's also unclear why the sergeant submitted the supplemental report at all, weeks after the incident. By that time, the lead investigator had already filed a sworn affidavit that contradicted the sergeant's account. For example, the investigator, who was on the scene, does not describe finding any blood or taking blood samples into evidence, only photographs of what the partner believed to be the fetus. 

One area where they concur: both reports are clearly marked as a "death investigation." 

Correcting the Record

Since 404 Media first reported on this case, King has perpetuated the false narrative, telling reporters that the woman was never under investigation, that officers had not considered charges against her, and that "it was all about her safety."

But here are the facts: 

  • The reports that have been released so far describe this as a death investigation.
  • The lead detective described himself as "working a death investigation… of a non-viable fetus" at the time he interviewed the woman (a week after the ALPR searches).
  • The detective wrote that they consulted the district attorney's office about whether they could charge her for "taking the pill to cause the abortion or miscarriage of the non-viable fetus." They were told they could not.
  • Investigators collected a lot of data, including photos and documentation of the abortion, and ran her through multiple databases. They even reviewed her text messages about the abortion. 
  • The death investigation was open for more than a month.

The death investigation was only marked closed in mid-June, weeks after 404 Media's article and a mere days before the Dallas Morning News published its report, in which the sheriff inaccurately claimed the woman "was not under investigation at any point."

Flock has promoted this unsupported narrative on its blog and in multimedia appearances. We did not reach out to Flock for comment on this article, as their communications director previously told us the company will not answer our inquiries until we "correct the record and admit to your audience that you purposefully spread misinformation which you know to be untrue" about this case. 

Consider the record corrected: It turns out the truth is even more damning than initially reported.

The Aftermath

In the aftermath of the original reporting, government officials began to take action. The networks searched by Johnson County included cameras in Illinois and Washington state, both states where abortion access is protected by law. Since then: 

  • The Illinois Secretary of State has announced his intent to “crack down on unlawful use of license plate reader data,” and urged the state’s Attorney General to investigate the matter. 
  • In California, which also has prohibitions on sharing ALPR out of state and for abortion-ban enforcement, the legislature cited the case in support of pending legislation to restrict ALPR use.
  • Ranking Members of the House Oversight Committee and one of its subcommittees launched a formal investigation into Flock’s role in “enabling invasive surveillance practices that threaten the privacy, safety, and civil liberties of women, immigrants, and other vulnerable Americans.” 
  • Senator Ron Wyden secured a commitment from Flock to protect Oregonians' data from out-of-state immigration and abortion-related queries.

In response to mounting pressure, Flock announced a series of new features supposedly designed to prevent future abuses. These include blocking “impermissible” searches, requiring that all searches include a “reason,” and implementing AI-driven audit alerts to flag suspicious activity. But as we've detailed elsewhere, these measures are cosmetic at best—easily circumvented by officers using vague search terms or reusing legitimate case numbers. The fundamental architecture that enabled the abuse remains unchanged. 

Meanwhile, as the news continued to harm the company's sales, Flock CEO Garrett Langley embarked on a press tour to smear reporters and others who had raised alarms about the usage. In an interview with Forbes, he even doubled down and extolled the use of the ALPR in this case. 

So when I look at this, I go “this is everything’s working as it should be.” A family was concerned for a family member. They used Flock to help find her, when she could have been unwell. She was physically okay, which is great. But due to the political climate, this was really good clickbait.

Nothing about this is working as it should, but it is working as Flock designed. 

The Danger of Unchecked Surveillance

A pair of Flock Safety cameras on a pole, with a solar panel

Flock Safety ALPR cameras

This case reveals the fundamental danger of allowing companies like Flock Safety to build massive, interconnected surveillance networks that can be searched across state lines with minimal oversight. When a single search query can access more than 83,000 cameras spanning almost the entire country, the potential for abuse is staggering, particularly when weaponized against people seeking reproductive healthcare. 

The searches in this case may have violated laws in states like Washington and Illinois, where restrictions exist specifically to prevent this kind of surveillance overreach. But those protections mean nothing when a Texas deputy can access cameras in those states with a few keystrokes, without external review that the search is legal and legitimate under local law. In this case, external agencies should have seen the word "abortion" and questioned the search, but the next time an officer is investigating such a case, they may use a more vague or misleading term to justify the search. In fact, it's possible it has already happened. 

ALPRs were marketed to the public as tools to find stolen cars and locate missing persons. Instead, they've become a dragnet that allows law enforcement to track anyone, anywhere, for any reason—including investigating people's healthcare decisions. This case makes clear that neither the companies profiting from this technology nor the agencies deploying it can be trusted to tell the full story about how it's being used.

States must ban law enforcement from using ALPRs to investigate healthcare decisions and prohibit sharing data across state lines. Local governments may try remedies like reducing data retention periods to minutes instead of weeks or months—but, really, ending their ALPR programs altogether is the strongest way to protect their most vulnerable constituents. Without these safeguards, every license plate scan becomes a potential weapon against a person seeking healthcare.


Axon’s Draft One Is Designed to Defy Transparency

Axon Enterprise’s Draft One — a generative artificial intelligence product that writes police reports based on audio from officers’ body-worn cameras — seems deliberately designed to avoid audits that could provide any accountability to the public, an EFF investigation has found.

Our review of public records from police agencies already using the technology — including police reports, emails, procurement documents, department policies, software settings, and more — as well as Axon’s own user manuals and marketing materials revealed that it’s often impossible to tell which parts of a police report were generated by AI and which parts were written by an officer.

You can read our full report, which details what we found in those documents, how we filed those public records requests, and how you can file your own, here. 

Everyone should have access to answers, evidence, and data regarding the effectiveness and dangers of this technology. Axon and its customers claim this technology will revolutionize policing, but it remains to be seen how it will change the criminal justice system, and who this technology benefits most.

For months, EFF and other organizations have warned about the threats this technology poses to accountability and transparency in an already flawed criminal justice system.  Now we've concluded the situation is even worse than we thought: There is no meaningful way to audit Draft One usage, whether you're a police chief or an independent researcher, because Axon designed it that way. 

Draft One uses a ChatGPT variant to process body-worn camera audio of public encounters and create police reports based only on the captured verbal dialogue; it does not process the video. The Draft One-generated text is sprinkled with bracketed placeholders where officers are encouraged to add additional observations or information—or which can be quickly deleted. Officers are supposed to edit Draft One's report and correct anything the Gen AI misunderstood due to a lack of context, troubled translations, or just plain-old mistakes. When the officer is done, they are prompted to sign an acknowledgement that the report was generated using Draft One and that they have reviewed the report and made necessary edits to ensure it is consistent with the officer’s recollection. Then they can copy and paste the text into their report. When they close the window, the draft disappears.

Any new, untested, and problematic technology needs a robust process to evaluate its use by officers. In this case, one would expect police agencies to retain data that ensures officers are actually editing the AI-generated reports as required, or that officers can accurately answer if a judge demands to know whether, or which part of, reports used by the prosecution were written by AI. 

"We love having new toys until the public gets wind of them."

One would expect audit systems to be readily available to police supervisors, researchers, and the public, so that anyone can make their own independent conclusions. And one would expect that Draft One would make it easy to discern its AI product from human product – after all, even your basic, free word processing software can track changes and save a document history.

But Draft One defies all these expectations, offering meager oversight features that deliberately conceal how it is used. 

So when a police report includes biased language, inaccuracies, misinterpretations, or even outright lies, the record won't indicate whether the officer or the AI is to blame. That makes it extremely difficult, if not impossible, to assess how the system affects justice outcomes, because there is little non-anecdotal data from which to determine whether the technology is junk. 

The disregard for transparency is perhaps best encapsulated by a short email that an administrator in the Frederick Police Department in Colorado, one of Axon's first Draft One customers, sent to a company representative after receiving a public records request related to AI-generated reports. 

"We love having new toys until the public gets wind of them," the administrator wrote.

No Record of Who Wrote What

The first question anyone should have about a police report written using Draft One is which parts were written by AI and which were added by the officer. Once you know this, you can start to answer more questions, like: 

  • Are officers meaningfully editing and adding to the AI draft? Or are they reflexively rubber-stamping the drafts to move on as quickly as possible? 
  • How often are officers finding and correcting errors made by the AI, and are there patterns to these errors? 
  • If there is inappropriate language or a fabrication in the final report, was it introduced by the AI or the officer? 
  • Is the AI overstepping in its interpretation of the audio? If a report says, "the subject made a threatening gesture," was that added by the officer, or did the AI make a factual assumption based on the audio? If a suspect uses metaphorical slang, does the AI document it literally? If a subject says "yeah" throughout a conversation as a verbal acknowledgement that they're listening to what the officer says, is that interpreted as an agreement or a confession?

"So we don’t store the original draft and that’s by design..."

Ironically, Draft One does not save the first draft it generates. Nor does the system store any subsequent versions. Instead, the officer copies and pastes the text into the police report, and the previous draft, originally created by Draft One, disappears as soon as the window closes. There is no log or record indicating which portions of a report were written by the computer and which portions were written by the officer, except for the officer's own recollection. If an officer generates a Draft One report multiple times, there's no way to tell whether the AI interprets the audio differently each time.

Axon is open about not maintaining these records, at least when it markets directly to law enforcement.

In this video of a roundtable discussion about the Draft One product, Axon’s senior principal product manager for generative AI is asked (at the 49:47 mark) whether or not it’s possible to see after-the-fact which parts of the report were suggested by the AI and which were edited by the officer. His response (bold and definition of RMS added): 

So we don’t store the original draft and that’s by design and that’s really because the last thing we want to do is create more disclosure headaches for our customers and our attorney’s offices—so basically the officer generates that draft, they make their edits, if they submit it into our Axon records system then that’s the only place we store it, if they copy and paste it into their third-party RMS [records management system] system as soon as they’re done with that and close their browser tab, it’s gone. It’s actually never stored in the cloud at all so you don’t have to worry about extra copies floating around.

To reiterate: Axon deliberately does not store the original draft written by the Gen AI, because "the last thing" they want is for cops to have to provide that data to anyone (say, a judge, defense attorney or civil liberties non-profit). 

Following up on the same question, Axon's Director of Strategic Relationships at Axon Justice suggests this is fine, since a police officer using a word processor wouldn't be required to save every draft of a police report as they rewrite it. This is, of course, misdirection, and the two are not remotely comparable. An officer with a word processor is a single thought process producing a single record from one party; Draft One involves two processes from two parties, Axon and the officer. Ultimately, it could and should be considered two records: the version Axon sent to the officer and the version the officer edited.

The days of word processors producing unexpected consequences in police report-writing may be over, but Draft One is still unproven. After all, every AI evangelist, including Axon, claims this technology is a game-changer. So why wouldn't an agency want to maintain a record that can establish the technology’s accuracy?

It also appears that Draft One isn't simply hewing to long-established norms of police report-writing; it may fundamentally change them. In one email, the Campbell Police Department's Police Records Supervisor tells staff, “You may notice a significant difference with the narrative format…if the DA’s office has comments regarding our report narratives, please let me know.” It's more than a little shocking that a police department would implement such a change without fully soliciting and addressing the input of prosecutors. In this case, the Santa Clara County District Attorney had already suggested police include a disclosure when Axon Draft One is used in each report, but Axon's engineers had yet to finalize the feature at the time it was rolled out. 

One of the main concerns, of course, is that this system effectively creates a smokescreen over truth-telling in police reports. If a police report contains a lie or inappropriate language, who is to say whether the officer or the AI wrote it? An officer can be punished severely for official dishonesty, but the consequences may be more lenient for a cop who blames it on the AI. Axon's engineers have already discovered a bug that, on at least three occasions, allowed officers to circumvent the "guardrails" that supposedly deter them from submitting AI-generated reports without reading them first, as Axon disclosed to the Frederick Police Department.

To serve and protect the public interest, the AI output must be continually and aggressively evaluated whenever and wherever it's used. But Axon has intentionally made this difficult. 

What the Audit Trail Actually Looks Like 

You may have seen news stories or other public statements asserting that Draft One does, indeed, have auditing features. So we dug through the user manuals to figure out what exactly that means.

The first thing to note is that, based on our review of the documentation, there appears to be no feature in Axon software that allows departments to export a list of all police officers who have used Draft One. Nor is it possible to export a list of all reports created by Draft One, unless the department has customized its process (we'll get to that in a minute).

This is disappointing because, without this information, it's nearly impossible to do even the most basic statistical analysis: how many officers are using the technology, and how often.

Based on the documentation, you can only export two types of very basic logs, with the process differing depending on whether an agency uses Evidence or Records/Standards products. These are:

  1. A log of basic actions taken on a particular report. If the officer requested a Draft One report or signed the Draft One liability disclosure related to the police report, it will show here. But nothing more than that.
  2. A log of an individual officer/user's basic activity in the Axon Evidence/Records system. This audit log shows things such as when an officer logs into the system, uploads videos, or accesses a piece of evidence. The only Draft One-related activities it tracks are whether the officer ran a Draft One request, signed the Draft One liability disclosure, or changed the Draft One settings.

This means that, to do a comprehensive review, an evaluator may need to go through the record management system and look up each officer individually to identify whether that officer used Draft One and when. That could mean combing through dozens, hundreds, or in some cases, thousands of individual user logs. 

An audit log on Axon's Draft One, which shows only when an officer has generated a report and when they have signed the liability disclosure.

An example of Draft One usage in an audit log.

An auditor could also go report-by-report to see which ones involved Draft One, but the sheer number of reports generated by an agency means this method would require a massive amount of time.

But can agencies even create a list of police reports that were co-written with AI? It depends on whether the agency has included a disclosure in the body of the text, such as "I acknowledge this report was generated from a digital recording using Draft One by Axon." If so, then an administrator can use "Draft One" as a keyword search to find relevant reports.
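If that boilerplate is present, the search itself is trivial. Here's a rough sketch of how such a keyword search works in principle (this is our illustration, not Axon's software; it assumes the agency has exported report narratives as plain-text files):

```python
import pathlib

# The phrase an agency's Draft One disclosure would contain; adjust this to
# match the boilerplate your agency actually requires.
DISCLOSURE_PHRASE = "draft one"

def find_disclosed_reports(report_dir):
    """Return the paths of exported report files that mention the disclosure phrase."""
    matches = []
    for path in sorted(pathlib.Path(report_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if DISCLOSURE_PHRASE in text.lower():
            matches.append(path)
    return matches
```

The point is that the whole method hinges on the disclosure: without that phrase in the report body, there is simply nothing to search for.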

Agencies that do not require that language told us they could not identify which reports were written with Draft One. For example, one of those agencies and one of Axon's most promoted clients, the Lafayette Police Department in Indiana, told us: 

"Regarding the attached request, we do not have the ability to create a list of reports created through Draft One. They are not searchable. This request is now closed."

Meanwhile, in response to a similar public records request, the Palm Beach County Sheriff's Office, which does require a disclosure at the bottom of each report noting that it was written with AI, was able to isolate more than 3,000 Draft One reports generated between December 2024 and March 2025.

They told us: "We are able to do a keyword and a timeframe search. I used the words draft one and the system generated all the draft one reports for that timeframe."

We have requested further clarification from Axon, but they have yet to respond. 

However, as we learned from email exchanges between the Frederick Police Department in Colorado and Axon, Axon is tracking police use of the technology at a level that isn't available to the police department itself. 

In response to a request from Politico's Alfred Ng in August 2024 for Draft One-generated police reports, the police department was struggling to isolate those reports. 

An Axon representative responded: "Unfortunately, there’s no filter for DraftOne reports so you’d have to pull a User’s audit trail and look for Draft One entries. To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy."

But then, Axon followed up: "We track which reports use Draft One internally so I exported the data." Then, a few days later, Axon provided Frederick with some custom JSON code to extract the data in the future. 
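We haven't seen Axon's internal export format, but pulling a report list out of a JSON dump like the one Frederick received is straightforward. A hypothetical sketch, with field names ("reports", "report_id", "created", "draft_one") that are our guesses rather than Axon's actual schema:

```python
import json

def list_draft_one_reports(json_text):
    """Return (report_id, created) pairs for entries flagged as Draft One.

    The field names used here are illustrative, not Axon's actual schema.
    """
    data = json.loads(json_text)
    return [
        (entry["report_id"], entry["created"])
        for entry in data.get("reports", [])
        if entry.get("draft_one")
    ]
```

In other words, the data to build this list exists; Axon simply hasn't exposed it to its law enforcement customers, let alone the public.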


What is Being Done About Draft One

The California Assembly is currently considering SB 524, a bill that addresses transparency measures for AI-written police reports. The legislation would require disclosure whenever police use artificial intelligence to partially or fully write official reports, as well as “require the first draft created to be retained for as long as the final report is retained.” Because Draft One is designed not to retain the first or any previous drafts of a report, it cannot comply with this common-sense, first-step bill, and any law enforcement usage would be unlawful.

Axon markets Draft One as a solution to a problem police have been complaining about for at least a century: that they do too much paperwork, or at least that they spend too much time doing paperwork. The current research on whether Draft One remedies this issue shows mixed results, with some agencies claiming it produces no real time savings and other agencies extolling its virtues (although their data also shows that results vary even within a department).

In the justice system, police must prioritize accuracy over speed. Public safety and a trustworthy legal system demand quality over corner-cutting. Time saved should not be the only metric, or even the most important one. It's like evaluating a drive-through restaurant based only on how fast the food comes out, while deliberately concealing the ingredients and nutritional information and failing to inspect whether the kitchen is up to health and safety standards. 

Given how untested this technology is and how hard the company is pushing to sell Draft One, many local lawmakers and prosecutors have taken it upon themselves to try to regulate the product’s use. Utah is currently considering a bill that would mandate disclosure for any police reports generated by AI, thus sidestepping one of the current major transparency issues: it’s nearly impossible to tell which finished reports started as an AI draft.

In King County, Washington, which includes Seattle, the prosecuting attorney’s office has been clear in its instructions: police should not use AI to write police reports. Its memo says:

We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.

We urge other prosecutors to follow suit and demand that police in their jurisdiction not unleash this new, unaccountable, and intentionally opaque AI product. 

Conclusion

Police should not be using AI to write police reports. There are just too many unanswered questions about how the AI interprets the audio of an incident and whether police will actually edit its drafts, and there is no way for the public to reliably discern what was written by a person and what was written by a computer. This is before we even get to the question of how these reports might compound and exacerbate existing problems, or create new ones, in an already unfair and opaque criminal justice system.

EFF will continue to research and advocate against the use of this technology but for now, the lesson is clear: Anyone with control or influence over police departments, be they lawmakers or people in the criminal justice system, has a duty to be informed about the potential harms and challenges posed by AI-written police reports.  


EFF's Guide to Getting Records About Axon's Draft One AI-Generated Police Reports

The moment Axon Enterprise announced a new product, Draft One, that would allow law enforcement officers to use artificial intelligence to automatically generate incident report narratives based on body-worn camera audio, everyone in the police accountability community immediately started asking the same questions:

What do AI-generated police reports look like? What kind of paper trail does this system leave? How do we get a hold of documentation using public records laws? 

Unfortunately, obtaining these records isn't easy. In many cases, it's straight-up impossible. 

Read our full report on how Axon's Draft One defies transparency expectations by design here.

In some jurisdictions, the documents are walled off behind government-created barriers. For example, California fully exempts police narrative reports from public disclosure, while other states charge fees to access individual reports that become astronomical if you want to analyze the output in bulk. Then there are technical barriers: Axon's product itself does not allow agencies to isolate reports that contain an AI-generated narrative, although an agency can voluntarily institute measures to make them searchable by a keyword.  

This spring, EFF tested out different public records request templates and sent them to dozens of law enforcement agencies we believed are using Draft One. 

We asked each agency for the Draft One-generated police reports themselves, knowing that in most cases this would be a long shot. We also dug into Axon's user manuals to figure out what kind of logs are generated and how to carefully phrase our public records request to get them. We asked for the current system settings for Draft One, since there are a lot of levers police administrators can pull that drastically change how and when officers can use the software. We also requested the standard records that we usually ask for when researching new technologies: procurement documents, agreements, training manuals, policies, and emails with vendors. 

Like all mass public records campaigns, the results were… mixed. Some agencies were refreshingly open with their records. Others assessed records fees well outside the usual range for a non-profit organization.

What we learned about the process is worth sharing. Axon has thousands of clients nationwide that use its Tasers, body-worn cameras and bundles of surveillance equipment, and the company is using those existing relationships to heavily promote Draft One.  We expect many more cities to deploy the technology over the next few years. Watchdogging police use of AI will require a nationwide effort by journalists, advocacy organizations and community volunteers.

Below we’re sharing some sample language you can use in your own public records requests about Draft One — but be warned. It’s likely that the more you include, the longer it might take and the higher the fees will get. The template language and our suggestions for filing public records requests are not legal advice. If you have specific questions about a public records request you filed, consult a lawyer.

1. Police Reports

Language to try in your public records request:

  • All police report narratives, supplemental report narratives, warrant affidavits, statements, and other narratives generated using Axon Draft One to document law enforcement-related incidents for the period between [DATE IN THE LAST FEW WEEKS] and the date this request is received. If your agency requires a Draft One disclosure in the text of the report, you can use "Draft One" as a keyword search term.

Or

  • The [NUMBER] most recent police report narratives that were generated using Axon Draft One between [DATE IN THE LAST FEW WEEKS] and the date this request is received.

If you are curious about a particular officer's Draft One usage, you can also ask for their reports specifically. However, it may be helpful to obtain their usage log first (see section 2).

  • All police report narratives, supplemental report narratives, warrant affidavits, statements, and other narratives generated by [OFFICER NAME] using Axon Draft One to document law enforcement-related incidents for the period between [DATE IN THE LAST FEW WEEKS] and the date this request is received.

We suggest using weeks, not months, because the sheer number of reports can get costly very quickly.

As an add-on to Axon's evidence and records management platforms, Draft One uses ChatGPT to convert audio taken from Axon body-worn cameras into the so-called first draft of the narrative portion of a police report. 

When Politico surveyed seven agencies in September 2024, reporter Alfred Ng found that police administrators did not have the technical ability to identify which reports contained AI-generated language. "There is no way for us to search for these on our end,” a Lafayette, Indiana police captain told Ng. Six months later, EFF received the same no-can-do response from the Lafayette Police Department:

 Regarding the attached request, we do not have the ability to create a list of reports created through Draft One. They are not searchable. This request is now closed.

Although Lafayette Police could not create a list on their own, it turns out that Axon's engineers can generate these lists for police if asked. When the Frederick Police Department in Colorado received a similar request from Ng, the agency contacted Axon for help. The company does internally track reports written with Draft One and was able to provide a spreadsheet of Draft One reports (.csv), and it even gave Frederick Police computer code to allow the agency to create similar lists in the future. Axon told them it would look at making this a built-in feature, but that appears not to have happened yet.

But we also struck gold with two agencies: the Palm Beach County Sheriff's Office (PBCSO) in Florida and the Lake Havasu City Police Department in Arizona. In both cases, the agencies require officers to include a disclosure that they used Draft One at the end of the police narrative. Here's a slide from the Palm Beach County Sheriff's Draft One training:

A slide titled "Narrative Footer" that tells officers they must include a disclosure at the bottom of their report.

And here's the boilerplate disclosure: 

I acknowledge this report was generated from a digital recording using Draft One by Axon. I further acknowledge that I have reviewed the report, made any necessary edits, and believe it to be an accurate representation of my recollection of the reported events. I am willing to testify to the accuracy of this report.

As small a gesture as it may seem, that disclosure makes all the difference when it comes to responding to a public records request. Lafayette Police could not isolate the reports because its policy does not require the disclosure. A Frederick Police Department sergeant noted in an email to Axon that they could isolate reports when the auto-disclosure was turned on, but not after they decided to turn it off. This year, Utah legislators introduced a bill to require this kind of disclosure on AI-generated reports.

As the PBCSO records manager told us: "We are able to do a keyword and a timeframe search. I used the words ‘Draft One’ and the system generated all the Draft One reports for that timeframe." In fact, in Palm Beach County and Lake Havasu, records administrators dug up huge numbers of records. But, once we saw the estimated price tag, we ultimately narrowed our request to just 10 reports.

Here is an example of a report from PBCSO, which only allows Draft One to be used in incidents that don't involve a criminal charge. As a result, many of the reports were related to mental health or domestic dispute responses.  

A police report peppered with redactions.

A machine readable text version of this report is available here. Full version here.

And here is an example from the Lake Havasu City Police Department, whose clerk was kind enough to provide us with a diverse sample of reports.

A police report peppered with redactions.

A machine readable text version of this report is available here. Full version here.

EFF redacted some of these records to protect the identity of members of the public who were captured on body-worn cameras. Black-bar redactions were made by the agencies, while bars with X's were made by us. You can view all the examples we received below: 

We also received police reports (perhaps unintentionally) from two other agencies that were contained as email attachments in response to another part of our request (see section 7).

2. Audit Logs

Language to try in your public records request:

Note: You can save time by determining in advance whether the agency uses Axon Evidence or Axon Records and Standards, then choose the applicable option below. If you don't know, you can always request both.

Audit logs from Axon Evidence

  • Audit logs for the period December 1, 2024 through the date this request is received, for the 10 most recently active users.
    According to Axon's online user manual, through Axon Evidence agencies are able to view audit logs of individual officers to ascertain whether they have requested the use of Draft One, signed a Draft One liability disclosure or changed Draft One settings (https://my.axon.com/s/article/View-the-audit-trail-in-Axon-Evidence-Draft-One?language=en_US). In order to obtain these audit logs, you may follow the instructions on this Axon page: https://my.axon.com/s/article/Viewing-a-user-audit-trail?language=en_US.
    In order to produce a list of the 10 most recently active users, you may click the arrow next to "Last Active" and then select the 10 most recent. The [...] menu item allows you to export the audit log. We would prefer these audit logs as .csv files if possible.
    Alternatively, if you know the names of specific officers, you can name them rather than selecting the most recent.

Or

Audit logs from Axon Records and Axon Standards

  • According to Axon's online user manual, through Axon Records and Standards, agencies are able to view audit logs of individual officers to ascertain whether they have requested a Draft One draft or signed a Draft One liability disclosure. https://my.axon.com/s/article/View-the-audit-log-in-Axon-Records-and-Standards-Draft-One?language=en_US
    To obtain these logs using the Axon Records Audit Tool, follow these instructions: https://my.axon.com/s/article/Audit-Log-Tool-Axon-Records?language=en_US
    a. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "M" into the audit tool. If no user comes up with M, please try "Mi."
    b. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "J" into the audit tool. If no user comes up with J, please try "Jo."
    c. Audit logs for the period December 1, 2024 through the date this request is received for the first user who comes up when you enter the letter "S" into the audit tool. If no user comes up with S, please try "Sa."

You could also tell the agency you are only interested in Draft One related items, which may save the agency time in reviewing and redacting the documents.

Generally, many of the basic actions a police officer takes using Axon technology — whether it's signing in, changing a password, accessing evidence, or uploading BWC footage — are logged in the system. 

This also includes some actions when an officer uses Draft One. However, the system only logs three types of activities: requesting that Draft One generate a report, signing a Draft One liability disclosure, or changing Draft One's settings. These logs are among the only ways to identify which reports were written with AI and to gauge how widely the technology is used. 

Unfortunately, Axon appears to have designed its system so that administrators cannot create a list of all Draft One activities taken by the entire police force. Instead, all they can do is view an individual officer's audit log to see when they used Draft One or look at the log for a particular piece of evidence to see if Draft One was used. These can be exported as a spreadsheet or a PDF. (When the Frederick Police Department asked Axon how to create a list of Draft One reports, the Axon rep told them that feature wasn't available and they would have to follow the above method. "To set expectations, it’s not going to be graceful, but this wasn’t a scenario we anticipated needing to make easy," Axon wrote in August 2024, then suggested it might come up with a long-term solution. We emailed Axon back in March to see if this was still the case, but they did not provide a response.) 
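Once an officer's audit log has been exported as a spreadsheet, isolating the Draft One entries is simple string matching. A minimal sketch, assuming a CSV export with an "Action" column (the real export's headers may differ):

```python
import csv

def draft_one_entries(csv_path):
    """Return the rows of an exported audit-log CSV whose action mentions Draft One."""
    with open(csv_path, newline="", encoding="utf-8") as fh:
        return [
            row for row in csv.DictReader(fh)
            if "draft one" in (row.get("Action") or "").lower()
        ]
```

Tedious or not, this is the kind of filtering an auditor would have to repeat for every individual officer's export, since there is no agency-wide view.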

Here's an excerpt from a PDF version from the Bishop Police Department in California:

A document titled "Audit Trail" for user Brian Honenstein that has entries on February 6, 2025 for "Draft One Request Received" and "Signed for a Draft One Liability Disclosure"

Here are some additional audit log examples: 

If you know the name of an individual officer, you can try to request their audit logs to see if they used Draft One. Since we didn't have a particular officer in mind, we had to get creative. 

An agency may manage their documents with one of a few different Axon offerings: Axon Evidence, Axon Records, or Axon Standards. The process for requesting records is slightly different depending on which one is used. We dug through the user manuals and came up with a few ways to export a random(ish) example. We also linked the manuals and gave clear instructions for the records officers.

With Axon Evidence, an administrator can simply sort the system to show the 10 most recent users and then export their usage logs. With Axon Records/Standards, the administrator has to start typing a name, which auto-populates with suggestions. So, we asked them to export the audit logs for the first few users who come up when they type the letters M, J, and S into the search (since those letters are common at the beginning of names). 

Unfortunately, this method is a little bit of a gamble. Many officers still aren't using Draft One, so you may end up with hundreds of pages of logs that don't mention Draft One at all (as was the case with the records we received from Monroe County, NY).

3. Settings

Language to try in your public records request: 

  • A copy of all settings and configurations made by this agency in its use of the Axon Draft One platform, including all opt-in features that the department has elected to use and the incident types for which the software can be used. A screen capture of these settings will suffice.

We knew the Draft One system offers department managers the option to customize how it can be used, including the categories of crime for which reports can be generated and whether or not there is a disclaimer automatically added to the bottom of the report disclosing the use of AI in its generation. So we asked for a copy of these settings and configurations. In some cases, agencies claimed this was exempted from their public records laws, while other agencies did provide the information. Here is an example from the Campbell Police Department in California: 

A screengrab of the settings menu within Draft One, with "Default acknowledgement," "Narration as input for Draft One," and "Default Footer" selected.

(It's worth noting that while Campbell does require each police report to contain a disclosure that Draft One was used, the California Public Records Act exempts police reports from being released.)

Examples of settings: 

4. Procurement-related Documents and Agreements

Language to try in your public records request:

  • All contracts, memorandums of understanding, and any other written agreements between this agency and Axon related to the use of Draft One, Narrative Assistant, or any other AI-assisted report generation tool provided by Axon. Responsive records include all associated amendments, exhibits, and supplemental and supporting documentation, as well as all relevant terms of use, licensing agreements, and any other guiding materials. If access to Draft One or similar tools is being provided via an existing contract or through an informal agreement, please provide the relevant contract or the relevant communication or agreement that facilitated the access. This includes all agreements, both formal and informal, including all trial access, even if that access does not or did not involve financial obligations.

It can be helpful to know how much Draft One costs, how many user licenses the agency paid for, and what the terms of the agreement are. That information is often contained in records related to the contracting process. Agencies will often provide these records with minimal pushback or redactions. Many of these records may already be online, so a requester can save time and effort by looking around first. These are often found in city council agenda packets. Also, law enforcement agencies often will bump these requests to the city or county clerk instead. 

Here's an excerpt from the Monroe County Sheriff's Office in New York:

An Axon purchasing agreement indicating the agency is adding Draft One to its existing suite of services.

These kinds of procurement records describe the nature and cost of the relationship between the police department and the company. They can be very helpful for understanding how much a continuing service subscription will cost and what else was bundled in as part of the purchase. Draft One, so far, is often accessed as an additional feature along with other Axon products. 

We received too many documents to list them all, but here is a representative example of some of the other documents you might receive, courtesy of the Dacono Police Department in Colorado.

5. Training, Manuals and Policies

Language to try in your public records request:

All training materials relevant to Draft One or Axon Narrative Assistant generated by this agency, including but not limited to:

  • All training material provided by Axon to this agency regarding its use of Draft One;
  • All internal training materials regarding the use of Draft One;
  • All user manuals, other guidance materials, help documents, or related materials;
  • Guides, safety tests, and other supplementary materials provided by Axon that mention Draft One, for the period between January 1, 2024 and the date this request is received;
  • Any and all policies and general orders related to the use of Draft One, the Narrative Assistant, or any other AI-assisted report generation offerings provided by Axon (An example of one such policy can be found here: https://cdn.muckrock.com/foia_files/2024/11/26/608_Computer_Software_and_Transcription-Assisted_Report_Generation.pdf).

In addition to seeing when Draft One was used and how it was acquired, it can be helpful to know what rules officers must follow, what directions they're given for using it, and what features are available to users. That's where manuals, policies and training materials come in handy. 

User manuals are typically going to come from Axon itself. In general, if you can get your hands on one, this will help you to better understand the mechanisms of the system, and it will help you align the way you craft your request with the way the system actually works. Luckily, Axon has published many of the materials online and we've already obtained the user manual from multiple agencies. However, Axon does update the manual from time to time, so it can be helpful to know which version the agency is working from.

Here's one from December 2024:

Policies are internal police department guidance for using Draft One. Not all agencies have developed a policy, but the ones they do have may reveal useful information, such as other records you might be able to request. Here are some examples: 

Training and user manuals also might reveal crucial information about how the technology is used. In some cases these documents are provided by Axon to the customer. These records may illuminate the specific direction that departments are emphasizing about using the product.

Here are a few examples of training presentations:

"A Dummies Guide to AI in Police Report Writing" from the Pasco Police Department

6. Evaluations

Language to try in your public records request:

  • All final reports, evaluations, or other documentation concluding or summarizing a trial period, evaluation period, or pilot project

Many departments are getting access to Draft One as part of a trial or pilot program. The outcome of those experiments with the product can be eye-opening or eyebrow-raising. There might also be additional data or a formal report that reviews what the department was hoping to get from the experience, how they structured any evaluation of its time-saving value for the department, and other details about how officers did or did not use Draft One. 

Here are some examples we received: 

7. Communications

Language to try in your public records request:

• All communications sent or received by any representative of this agency with individuals representing Axon referencing the following term, including emails and attachments:

  • Draft One
  • Narrative Assistant
  • AI-generated report

• All communications sent to or received by any representative of this agency with each of the following email addresses, including attachments:

  • [INSERT EMAIL ADDRESSES]

Note: We are not including the specific email addresses here that we used, since they are subject to change when employees are hired, promoted, or find new gigs. However, you can find the emails we used in our requests on MuckRock.

The communications we wanted were primarily the emails between Axon and the law enforcement agency. As you can imagine, these emails could reveal the back-and-forth between the company and its potential customers, and these conversations could include the marketing pitch made to the department, the questions and problems police may have had with it, and more. 

In some cases, these emails reveal cozy relationships between salespeople and law enforcement officials. Take, for example, this email exchange between the Dickinson Police Department and an Axon rep:

An email exchange in which a Dickinson Police officer invites an Axon rep to a golf tournament, and the Axon rep calls him "brother."

Or this email between a Frederick Police Department sergeant and an Axon representative, in which a sergeant describes himself as "doing sales" for Axon by providing demos to other agencies.

An email from a Frederick Police Officer to an Axon rep.

A machine readable text version of this email is available here.

Emails like this also show which other agencies are considering using Draft One in the future. For example, this email we received from the Campbell Police Department shows that the San Francisco Police Department was testing Draft One as early as October 2024 (the usage was confirmed in June 2025 by the San Francisco Standard).

An SFPD email asking for advice on using Draft One.

A machine readable text version of this email is available here.

Your mileage will certainly vary for these email requests, in part because agencies' ability to search their communications varies. Some agencies can search by a keyword like "Draft One" or "Axon," while other agencies can only search by a specific email address. 

Communications can be one of the more expensive parts of the request. We've found that adding a date range and key terms or email addresses has helped limit these costs and made our requests a bit clearer for the agency. Axon sends a lot of automated emails to its subscribers, so the agency may quote a large fee for hundreds or thousands of emails that aren't particularly interesting. Many agencies respond positively if a requester reaches out to say they're open to narrowing or focusing their request. 
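If you receive a large export of emails, narrowing by keyword and date range yourself can help you gauge how much of it is substantive before negotiating fees on a follow-up request. Here is a minimal sketch of that kind of triage; the keywords, date range, and toy inbox are all hypothetical stand-ins, not the terms EFF actually used:

```python
from datetime import datetime

# Hypothetical search terms and window -- adjust to match your own request.
KEYWORDS = ("draft one", "narrative assistant", "axon")
START, END = datetime(2024, 1, 1), datetime(2025, 7, 1)

def matches(subject: str, body: str, sent: datetime) -> bool:
    """True if an email falls inside the date range and mentions a keyword."""
    if not (START <= sent <= END):
        return False
    text = f"{subject} {body}".lower()
    return any(keyword in text for keyword in KEYWORDS)

# Toy inbox standing in for a real export (subject, body, date sent).
inbox = [
    ("Draft One pilot kickoff", "Agenda attached.", datetime(2024, 3, 5)),
    ("Golf tournament", "See you Saturday.", datetime(2024, 6, 1)),
    ("Renewal quote", "Updated pricing attached.", datetime(2023, 11, 2)),
]
hits = [subject for subject, body, sent in inbox if matches(subject, body, sent)]
print(hits)  # only in-range emails that mention a keyword
```

The same keyword-and-date logic works whether the export is an mbox file, a PST conversion, or a folder of PDFs, and the hit count gives you a concrete number to cite when asking the agency to narrow or re-quote a request.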

Asking for Body-Worn Camera Footage 

One of the big questions is how Draft One-generated reports compare to the body-worn camera (BWC) audio the narrative is based on. Are the reports accurate? Are they twisting people's words? Does Draft One hallucinate?

Finding these answers requires obtaining both the police report and the footage of the incident that was fed into the system. The laws and processes for obtaining BWC footage vary dramatically from state to state, and even department to department. Depending on where you live, it can also get expensive very quickly, since some states allow agencies to charge you not only for the footage but for the time it takes to redact the footage. So before requesting footage, read up on your state's public access laws or consult a lawyer.

However, once you have a copy of a Draft One report, you should have enough information to file a follow-up request for the BWC footage. 

So far, EFF has not requested BWC footage. In addition to the aforementioned financial and legal hurdles, the footage can implicate both individual privacy and transparency regarding police activity. As an organization that advocates for both, we want to make sure we get this balance right. After all, BWCs are a surveillance technology that collects intelligence on suspects, victims, witnesses, and random passersby. When the Palm Beach County Sheriff's Office gave us an AI-generated account of a teenager being hospitalized for suicidal ideations, we of course felt that the minor's privacy outweighed our interest in evaluating the AI. But do we feel the same way about a Draft One-generated narrative about a spring break brawl in Lake Havasu? 

Ultimately, we may try to obtain a limited amount of BWC footage, but we also recognize that we shouldn't make the public wait while we work it out for ourselves. Accountability requires different methods, different expertise, and different interests. With this guide we hope not only to shine a light on Draft One, but to provide the schematics for others, including academics, journalists, and local advocates, to build their own spotlights to expose police use of this problematic technology.

Where to Find More Docs 

Despite the variation in how agencies responded, we did have some requests that proved fruitful. You can find these requests and the documents we got via the linked police department names below.

Please note that we filed two different types of requests, so not all the elements above may be represented in each link.

Via Document Cloud (PDFs)

Via MuckRock (Assorted filetypes)

Special credit goes to EFF Research Assistant Jesse Cabrera for public records request coordination. 
