
What Can Go Wrong When Police Use AI to Write Reports?

8 May 2024 at 11:52

Axon—the maker of widely used police body cameras and Tasers (and a company that keeps trying to arm drones)—has a new product: AI that will write police reports for officers. Draft One is a generative AI system built on a large language model that reportedly takes audio from body-worn cameras and converts it into a narrative police report that officers can then edit and submit after an incident. Axon bills this product as the ultimate time-saver for police departments hoping to get officers out from behind their desks. But this technology could present new issues for those who encounter police, especially the marginalized communities already subject to a disproportionate share of police interactions in the United States.
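
To make the described pipeline concrete, here is a minimal, purely illustrative Python sketch of the kind of audio-to-report workflow outlined above. Nothing in it reflects Axon's actual implementation; the function names, prompt, and stubbed outputs are assumptions made only for illustration.

# Conceptual sketch (not Axon's implementation) of an audio-to-report pipeline.
# All names, prompts, and outputs below are hypothetical placeholders.

def transcribe_bodycam_audio(audio_path: str) -> str:
    """Stub for a speech-to-text step; a real system would call an ASR model."""
    return "Officer: Stop! Suspect: I wasn't doing anything. [transcript text]"

def draft_report(transcript: str) -> str:
    """Stub for an LLM call that turns a transcript into a first-person narrative.

    A real system would send a prompt like the one below to a large language
    model; the model's output is a guess at what happened, shaped entirely by
    what was audible on the recording.
    """
    prompt = (
        "Write a first-person police incident report based only on this "
        "body-camera transcript:\n" + transcript
    )
    return f"[LLM-generated draft based on prompt: {prompt[:60]}...]"

def officer_review(draft: str) -> str:
    """The officer is supposed to edit and approve the draft before filing.
    Whether this review is meaningful is exactly the accountability question."""
    return draft  # rubber-stamping the draft unchanged is the failure mode

if __name__ == "__main__":
    transcript = transcribe_bodycam_audio("incident_audio.wav")
    report = officer_review(draft_report(transcript))
    print(report)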

Responsibility and the Codification of (Intended or Otherwise) Inaccuracies

We’ve seen it before. Grainy and shaky police body-worn camera video in which an arresting officer shouts, “Stop resisting!” This phrase can lead to greater use of force by officers or come with enhanced criminal charges.  Sometimes, these shouts may be justified. But as we’ve seen time and again, the narrative of someone resisting arrest may be a misrepresentation. Integrating AI into narratives of police encounters might make an already complicated system even more ripe for abuse.

If the officer says aloud in a body camera video, “the suspect has a gun,” how would that translate into the software’s narrative final product?

The public should be skeptical of a language algorithm's ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms, and slang people use. As we've learned from watching content moderation develop online, software may have a passable ability to capture words, but it often struggles with context and meaning. In an often tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change how a police report is interpreted.

Moreover, as with all so-called artificial intelligence taking over consequential tasks and decision-making, the technology has the power to obscure human agency. Police officers who deliberately speak with mistruths or exaggerations to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were to be caught in a lie concerning what’s in the report, an officer might be able to say that they did not lie: the AI simply mistranscribed what was happening in the chaotic video.

It’s also unclear how this technology will work in action. If the officer says aloud in a body camera video, “the suspect has a gun,” how would that translate into the software’s narrative final product? Would it interpret that by saying “I [the officer] saw the suspect produce a weapon” or “The suspect was armed”? Or would it just report what the officer said: “I [the officer] said aloud that the suspect has a gun”? Interpretation matters, and the differences between them could have catastrophic consequences for defendants in court.

Review, Transparency, and Audits

The issue of review, auditing, and transparency raises a number of questions. Although Draft One allows officers to edit reports, how will it ensure that officers are adequately reviewing for accuracy rather than rubber-stamping the AI-generated version? After all, police have been known to arrest people based on the results of a match by face recognition technology without any follow-up investigation—contrary to vendors’ insistence that such results should be used as an investigative lead and not a positive identification.

Moreover, if the AI-generated report is incorrect, can we trust police will contradict that version of events if it's in their interest to maintain inaccuracies? On the flip side, might AI report writing go the way of AI-enhanced body cameras? In other words, if the report consistently produces a narrative from audio that police do not like, will they edit it, scrap it, or discontinue using the software altogether?

And what of external reviewers’ ability to access these reports? Given police departments’ intense secrecy, combined with a frequent failure to comply with public records laws, how can the public, or any external agency, independently verify or audit these AI-assisted reports? And how will external reviewers know which portions of the report were generated by AI versus a human?

Police reports, skewed and biased as they often are, codify the police department’s memory. They reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not. Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency, or accountability.

Responding to ShotSpotter, Police Shoot at Child Lighting Fireworks

22 March 2024 at 19:10

This post was written by Rachel Hochhauser, an EFF legal intern

We’ve written multiple times about the inaccurate and dangerous “gunshot detection” tool, ShotSpotter. A recent near-tragedy in Chicago adds to the growing pile of evidence that cities should drop the product.

On January 25, while responding to a ShotSpotter alert, a Chicago police officer opened fire on an unarmed “maybe 14 or 15” year old child in his backyard. Three officers approached the boy’s house, with one asking “What you doing bro, you good?” They heard a loud bang, later determined to be fireworks, and shot at the child. Fortunately, no physical injuries were recorded. In initial reports, police falsely claimed that they fired at a “man” who had fired on officers.

In a subsequent assessment of the event, the Chicago Civilian Office of Police Accountability (“COPA”) concluded that “a firearm was not used against the officers.” Chicago Police Superintendent Larry Snelling placed all attending officers on administrative duty for 30 days and is investigating whether the officers violated department policies.

ShotSpotter is the largest company that produces and distributes acoustic gunshot detection systems for U.S. cities and police departments. Currently, it is used by 100 law enforcement agencies. The system relies on sensors positioned on buildings and lamp posts, which purportedly detect the acoustic signature of a gunshot. The information is then forwarded to humans who purportedly have the expertise to verify whether the sound was gunfire (and not, for example, a car backfiring), and whether to deploy officers to the scene.
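
As a rough illustration of that detect-then-verify architecture, here is a toy Python sketch. ShotSpotter's actual signal processing and review criteria are proprietary and not public; the thresholds, field names, and logic below are invented, and exist only to show why impulsive sounds like fireworks can trip this kind of pipeline.

# Illustrative sketch only: a naive version of the detect-then-verify pipeline
# described above. The thresholds and logic are invented for illustration.
from dataclasses import dataclass

@dataclass
class SensorEvent:
    sensor_id: str
    peak_loudness_db: float   # impulsive sounds (gunfire, fireworks, backfires) all spike
    duration_ms: float

def flag_candidate(event: SensorEvent) -> bool:
    """A crude acoustic trigger: loud and short. Fireworks and car backfires
    easily satisfy the same test, which is the core false-positive problem."""
    return event.peak_loudness_db > 120 and event.duration_ms < 50

def human_review(event: SensorEvent) -> str:
    """Stand-in for the company's human reviewers, who decide whether to
    forward the alert to police; their accuracy is the disputed claim."""
    return "dispatch" if flag_candidate(event) else "dismiss"

if __name__ == "__main__":
    fireworks = SensorEvent("lamppost-17", peak_loudness_db=135.0, duration_ms=30.0)
    print(human_review(fireworks))  # a naive pipeline sends police to fireworks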

ShotSpotter claims that its technology is “97% accurate,” a figure produced by the marketing department and not engineers. The recent Chicago shooting shows how little that figure can be trusted. Indeed, a 2021 study in Chicago found that, in a period of 21 months, ShotSpotter resulted in police acting on dead-end reports over 40,000 times. Likewise, the Cook County State’s Attorney’s office concluded that ShotSpotter had “minimal return on investment” and resulted in arrests in only 1% of proven shootings, according to a recent CBS report. The technology is predominantly used in Black and Latinx neighborhoods, contributing to the over-policing of these areas. Police responding to ShotSpotter alerts arrive at scenes expecting gunfire, on edge and therefore more likely to draw their firearms.

Finally, these sensors invade the right to privacy. Even in public places, people often have a reasonable expectation of privacy and therefore a legal right not to have their voices recorded. But these sound sensors risk the capture and leaking of private conversations. In People v. Johnson in California, a court held such recordings from ShotSpotter to be admissible evidence.

In February, Chicago’s Mayor announced that the city would not be renewing its contract with ShotSpotter. Many other cities have cancelled or are considering cancelling use of the tool.

This technology endangers lives, disparately impacts communities of color, and encroaches on the privacy rights of individuals. It has a history of false positives and poses clear dangers to pedestrians and residents. It is urgent that these inaccurate and harmful systems be removed from our streets.

Cops Running DNA-Manufactured Faces Through Face Recognition Is a Tornado of Bad Ideas

22 March 2024 at 11:52

In keeping with law enforcement’s grand tradition of taking antiquated, invasive, and oppressive technologies, making them digital, and then calling it innovation, police in the U.S. recently combined two existing dystopian technologies in a brand new way to violate civil liberties. A police force in California employed the new practice of taking a DNA sample from a crime scene, running it through a service provided by US company Parabon NanoLabs that guesses what the perpetrator’s face looked like, and plugging this rendered image into face recognition software to build a suspect list.

Parts of this process aren't entirely new. On more than one occasion, police forces have been found to have fed images of celebrities into face recognition software to generate suspect lists. In one case from 2017, the New York Police Department decided its suspect looked like Woody Harrelson and ran the actor’s image through the software to generate hits. Further, software provided by US company Vigilant Solutions enables law enforcement to create “a proxy image from a sketch artist or artist rendering” to enhance images of potential suspects so that face recognition software can match these more accurately.

Since 2014, law enforcement agencies have also sought the assistance of Parabon NanoLabs—a company that alleges it can create an image of the suspect’s face from their DNA. Parabon NanoLabs claims to have built this system by training machine learning models on the DNA data of thousands of volunteers paired with 3D scans of their faces. It is currently the only company offering phenotyping, and only in concert with a forensic genetic genealogy investigation. The process is yet to be independently audited, and scientists have affirmed that predicting face shapes—particularly from DNA samples—is not possible. But this has not stopped law enforcement officers from seeking to use it, or from running these fabricated images through face recognition software.
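
The following toy Python sketch shows, in the most general terms, the shape of a supervised "DNA phenotyping" model: map genotype features to coarse trait guesses. It is not Parabon NanoLabs' method; the SNP encoding, training data, and nearest-neighbour vote are all fabricated for illustration, and the point is that the output is a statistical guess about broad traits, not an actual face.

# Toy illustration only (not Parabon NanoLabs' model). The data below is
# fabricated and meaningless; each "genotype" is a short vector of 0/1/2
# allele counts at a handful of hypothetical SNPs, paired with a coarse trait.
training = [
    ([2, 0, 1, 2], "brown eyes"),
    ([0, 2, 2, 0], "blue eyes"),
    ([2, 1, 1, 2], "brown eyes"),
    ([0, 2, 1, 0], "blue eyes"),
]

def predict_trait(genotype, k=3):
    """A nearest-neighbour guess: find the most similar training genotypes and
    vote. Real systems are far more elaborate, but the output is still a
    probabilistic trait prediction that then gets turned into a speculative
    composite face."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(training, key=lambda item: distance(item[0], genotype))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

if __name__ == "__main__":
    crime_scene_genotype = [1, 1, 1, 1]        # fabricated example input
    print(predict_trait(crime_scene_genotype))  # a guess, not an identification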

Simply put: police are using DNA to create a hypothetical and not at all accurate face, then using that face as a clue on which to base investigations into crimes. Not only is this full dice-roll policing, it also threatens the rights, freedom, or even the life of whoever is unlucky enough to look a little bit like that artificial face.

But it gets worse.

In 2020, a detective from the East Bay Regional Park District Police Department in California asked to have a rendered image from Parabon NanoLabs run through face recognition software. This 3D rendering, called a Snapshot Phenotype Report, predicted that—among other attributes—the suspect was male, had brown eyes, and fair skin. Found in police records published by Distributed Denial of Secrets, this appears to be the first reporting of a detective running an algorithmically generated rendering based on crime-scene DNA through face recognition software. This puts a second layer of speculation between the actual face of the suspect and the product the police are using to guide investigations and make arrests. Not only is the artificial face a guess; face recognition (a technology known to misidentify people) will then create a “most likely match” for that guessed face.

These technologies, and their reckless use by police forces, are an inherent threat to our individual privacy, free expression, information security, and social justice. Face recognition tech alone has an egregious history of misidentifying people of color, especially Black women, as well as failing to correctly identify trans and nonbinary people. The algorithms are not always reliable, and even if the technology somehow had 100% accuracy, it would still be an unacceptable tool of invasive surveillance capable of identifying and tracking people on a massive scale. Combining this with fabricated 3D renderings from crime-scene DNA exponentially increases the likelihood of false arrests, and exacerbates existing harms on communities that are already disproportionately over-surveilled by face recognition technology and discriminatory policing. 

There are no federal rules that prohibit police forces from undertaking these actions. And despite the detective’s request violating Parabon NanoLabs’ terms of service, there is seemingly no way to ensure compliance. Pulling together criteria like skin tone, hair color, and gender does not give an accurate face of a suspect, and deploying these untested algorithms without any oversight places people at risk of becoming suspects for crimes they didn’t commit. In one case from Canada, Edmonton Police Service issued an apology over its failure to balance the harms to the Black community with the potential investigative value after using Parabon’s DNA phenotyping services to identify a suspect.

EFF continues to call for a complete ban on government use of face recognition—because otherwise these are the results. How much more evidence do lawmakers need that police cannot be trusted with this dangerous technology? How many more people need to be falsely arrested and how many more reckless schemes like this one need to be perpetrated before legislators realize this is not a sustainable method of law enforcement? Cities across the United States have already taken the step to ban government use of this technology, and Montana has specifically recognized a privacy interest in phenotype data. Other cities and states need to catch up, or Congress needs to act, before more people are hurt and our rights are trampled.

Lucy Parsons Labs Takes Police Foundation to Court for Open Records Requests

19 March 2024 at 18:55

The University of Georgia (UGA) School of Law’s First Amendment Clinic has filed an Open Records Request lawsuit to demand public records from the private Atlanta Police Foundation (APF). The lawsuit, filed at the behest of the Atlanta Community Press Collective and Electronic Frontier Alliance-member Lucy Parsons Labs, is seeking records relating to the Atlanta Public Safety Training Center, which activists refer to as Cop City. While the facility will be used for public law enforcement and emergency services agencies, including training on surveillance technologies, the lease is held by the APF.  

The argument is that the Atlanta Police Foundation, as the nonprofit holding the lease for facilities intended for use by government agencies, should be subject to the state Open Records Act with respect to the functions it performs on behalf of law enforcement agencies. Beyond the Atlanta Public Safety Training Center, the APF also manages the Atlanta Police Department’s Video Surveillance Center, which integrates footage from over 16,000 public and privately-held surveillance cameras across the city.

According to UGA School of Law’s First Amendment Clinic, “The Georgia Supreme Court has held that records in the custody of a private entity that relate to services or functions the entity performs for or on behalf of the government are public records under the Georgia Open Records Act.” 

Police foundations frequently operate in this space. They are private, non-profit organizations with boards made up of corporations and law firms that receive monetary or equipment donations that they then gift to their local law enforcement agencies. These gifts often bypass council hearings or other forms of public oversight. 

Lucy Parsons Labs’ Ed Vogel said, “At the core of the struggle over the Atlanta Public Safety Training Center is democratic practice. Decisions regarding this facility should not be made behind closed doors. This lawsuit is just one piece of that. The people have a right to know.” 

You can read the lawsuit here. 

San Diego City Council Breaks TRUST

15 March 2024 at 14:54

In a stunning reversal against the popular Transparent & Responsible Use of Surveillance Technology (TRUST) ordinance, the San Diego city council voted earlier this year to cut many of the provisions that sought to ensure public transparency for law enforcement surveillance technologies. 

Similar to other Community Control Of Police Surveillance (CCOPS) ordinances, the TRUST ordinance was intended to ensure that each police surveillance technology would be subject to basic democratic oversight in the form of public disclosures and city council votes. The TRUST ordinance was fought for by a coalition of community organizations, including several members of the Electronic Frontier Alliance, responding to surprise smart streetlight surveillance that was not put under public or city council review.

The TRUST ordinance was passed one and a half years ago, but law enforcement advocates immediately set up roadblocks to implementation. Police unions, for example, insisted that some of the provisions around accountability for misuse of surveillance needed to be halted after passage to ensure they didn’t run into conflict with union contracts. The city kept the ordinance unapplied and untested, and then in the late summer of 2023, a little over a year after passage, the mayor proposed a package of changes that would gut the ordinance. This included exempting a long list of technologies, including ARJIS databases and record management system data storage. These changes were approved this past January.

But use of these databases should require, for example, auditing to protect data security for city residents. There also should be limits on how police share data with federal agencies and other law enforcement agencies, which might use that data to criminalize San Diego residents for immigration status, gender-affirming health care, or exercise of reproductive rights that are not criminalized in the city or state. The overall TRUST ordinance stands, but it is now partly defanged, with carve-outs for many technologies that the San Diego police will not need to bring before democratically-elected lawmakers and the public.

Now, opponents of the TRUST ordinance are emboldened by their recent victory and are vowing to introduce even more amendments to further erode the gains of this ordinance, so that San Diegans won’t have a chance to know how their local law enforcement surveils them and no democratic body will be required to consent to the technologies, new or old. The members of the TRUST Coalition are not standing down, however, and will continue to fight to defend the standing portions of the TRUST ordinance and to regain the wins for public oversight that were lost.

As Lilly Irani, from Electronic Frontier Alliance member and TRUST Coalition member Tech Workers Coalition San Diego, has said:

“City Council members and the mayor still have time to make this right. And we, the people, should hold our elected representatives accountable to make sure they maintain the oversight powers we currently enjoy — powers the mayor’s current proposal erodes.” 

If you live or work in San Diego, it’s important to make it clear to city officials that San Diegans don’t want to give police a blank check to harass and surveil them. Such dangerous technology needs basic transparency and democratic oversight to preserve our privacy, our speech, and our personal safety. 

The Atlas of Surveillance Removes Ring, Adds Third-Party Investigative Platforms

8 March 2024 at 16:32

Running the Atlas of Surveillance, our project to map and inventory police surveillance across the United States, means experiencing emotional extremes.

Whenever we announce that we've added new data points to the Atlas, it comes with a great sense of satisfaction. That's because it almost always means that we're hundreds or even thousands of steps closer to achieving what only a few years ago would've seemed impossible: comprehensively documenting the surveillance state through our partnership with students at the University of Nevada, Reno Reynolds School of Journalism.

At the same time, it's depressing as hell. That's because it also reflects how quickly and dangerously surveillance technology is metastasizing.

We have the exact opposite feeling when we remove items from the Atlas of Surveillance. It's a little sad to see our numbers drop, but at the same time that change in data usually means that a city or county has eliminated a surveillance program.

That brings us to the biggest change in the Atlas since our launch in 2018. This week, we removed 2,530 data points: an entire category of surveillance. With the announcement from Amazon that its home surveillance company Ring will no longer facilitate warrantless requests for consumer video footage, we've decided to sunset that particular dataset.

While law enforcement agencies still maintain accounts on Ring's Neighbors social network, it seems to serve as a communications tool, a function on par with services like Nixle and Citizen, which we currently don't capture in the Atlas. That's not to say law enforcement won't be gathering footage from Ring cameras: they will, through legal process or by directly asking residents to give them access via the Fusus platform. But that type of surveillance doesn't result from merely having a Neighbors account (agencies without accounts can use these methods to obtain footage), which was what our data documented. You can still find out which agencies are maintaining camera registries through the Atlas. 

Ring's decision was a huge victory – and the exact outcome EFF and other civil liberties groups were hoping for. It also has opened up our capacity to track other surveillance technologies growing in use by law enforcement. If we were going to remove a category, we decided we should add one too.

Atlas of Surveillance users will now see a new type of technology: Third-Party Investigative Platforms, or TPIPs. Common TPIP products include Thomson Reuters CLEAR, LexisNexis Accurint Virtual Crime Center, TransUnion TLOxp, and SoundThinking CrimeTracer (formerly Coplink X from Forensic Logic). These are technologies we've been watching for a while but have struggled to categorize and define. Here's the definition we've come up with:

Third-Party Investigative Platforms are cloud-based software systems that law enforcement agencies subscribe to in order to access, share, mine, and analyze various sources of investigative data. Some of the data the agencies upload themselves, but the systems also provide access to data from other law enforcement, as well as from commercial sources and data brokers. Many products offer AI features, such as pattern identification, face recognition, and predictive analytics. Some agencies employ multiple TPIPs.
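
As a conceptual sketch of what that definition means in practice, the toy Python below fans a single name query out across several mocked data sources and merges the results into one profile. The source names and record fields are invented and do not correspond to any specific vendor's product.

# Conceptual sketch only: illustrates what "access, share, mine, and analyze
# various sources" means in practice. One query fans out across agency records,
# other agencies' uploads, and commercial data-broker records, and the results
# are merged into a single profile. All data below is mocked.
from collections import defaultdict

SOURCES = {
    "own_agency_rms": [
        {"name": "J. Doe", "field": "arrest record", "value": "2019 citation"},
    ],
    "partner_agency_uploads": [
        {"name": "J. Doe", "field": "field interview", "value": "2021 stop"},
    ],
    "commercial_data_broker": [
        {"name": "J. Doe", "field": "address history", "value": "3 prior addresses"},
        {"name": "J. Doe", "field": "vehicle registration", "value": "sedan, plate on file"},
    ],
}

def fan_out_query(name: str) -> dict:
    """Merge every record matching a name across all connected sources."""
    profile = defaultdict(list)
    for source, records in SOURCES.items():
        for record in records:
            if record["name"] == name:
                profile[record["field"]].append((source, record["value"]))
    return dict(profile)

if __name__ == "__main__":
    # A single search surfaces data the querying agency never collected itself.
    print(fan_out_query("J. Doe"))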

We are calling this new category a beta feature in the Atlas, since we are still figuring out how best to research and compile this data nationwide. You'll find fairly comprehensive data on the use of CrimeTracer in Tennessee and Massachusetts, because both states provide the software to local law enforcement agencies throughout the state. Similarly, we've got a large dataset for the use of the Accurint Virtual Crime Center in Colorado, due to a statewide contract. (Big thanks to Prof. Ran Duan's Data Journalism students for working with us to compile those lists!) We've also added more than 60 other agencies around the country, and we expect that dataset to grow as we hone our research methods.

If you've got information on the use of TPIPs in your area, don't hesitate to reach out. You can email us at aos@eff.org, submit a tip through our online form, or file a public records request using the template that EFF and our students have developed to reveal the use of these platforms. 

We Flew a Plane Over San Francisco to Fight Proposition E. Here's Why.

29 February 2024 at 15:19

Proposition E, which San Franciscans will be asked to vote on in the March 5 election, is so dangerous that last weekend we chartered a plane to inform our neighbors about what the ballot measure does and urge them to vote NO on it. If you were in Dolores Park, Golden Gate Park, Chinatown, or anywhere in between on Saturday, there’s a chance you saw it, with a huge banner flying through the sky: “No Surveillance State! No on Prop E.”

Despite the fact that the San Francisco Chronicle has endorsed a NO vote on Prop E, and even quoted some police who don’t find its changes useful to keeping the public safe, proponents of Prop E have raised over $1 million to push this unnecessary, ill-thought-out, and downright dangerous ballot measure.

San Francisco, Say NOPE: Vote NO on Prop E on March 5

[Photo: A plane flying over the San Francisco skyline carrying a banner asking people to vote no on Prop E]

What Does Prop E Do?

Prop E is a haphazard mess of proposals that tries to capitalize on residents’ fear of crime in an attempt to gut commonsense democratic oversight of the San Francisco Police Department (SFPD). In addition to removing certain police oversight authority from the civilian-staffed Police Commission and expanding the circumstances under which police may conduct high-speed vehicle chases, Prop E would also amend existing law passed in 2019 to protect San Franciscans from invasive, untested, or biased police surveillance technologies. Currently, if the SFPD wants to acquire a new technology, they must provide a detailed use policy to the democratically-elected Board of Supervisors, in a process that allows for public comment. The Board then votes on whether and how the police can use the technology.

Prop E guts these protective measures designed to bring communities into the conversation about public safety. If Prop E passes on March 5, then the SFPD can unilaterally use any technology they want for a full year without the Board’s approval, without publishing an official policy about how they’d use the technology, and without allowing community members to voice their concerns.


Why is Prop E Dangerous and Unnecessary?

Across the country, police often buy and deploy surveillance equipment without residents of their towns even knowing what police are using or how they’re using it. This means that dangerous technologies—technologies other cities have even banned—are being used without any transparency, accountability, or democratic control.

San Franciscans advocated for and overwhelmingly supported a law that provides them with more knowledge of, and a voice in, what technologies the police use. Under current law, if the SFPD wanted to use racist predictive policing algorithms that U.S. Senators are currently advising the Department of Justice to stop funding, or if the SFPD wanted to buy up geolocation data being harvested from people’s cell phones and sold on the advertising data broker market, they would have to let the public know and put it to a vote before the city’s democratically-elected governing body. Prop E would gut any meaningful democratic check on police’s acquisition and use of surveillance technologies.

What Technology Would Prop E Allow Police to Use?

That's the thing—we don't know, and if Prop E passes, we may never know. Today, if the SFPD decides to use a piece of surveillance technology, there is a process for sharing that information with the public. With Prop E, that process won't happen until the technology has been in use for a full year. And if police abandon use of a technology before a year, we may never find out what technology police tried out and how they used it. 

Even though we don't know what technologies the SFPD is eyeing, we do know what technologies other police departments have been buying in cities around the country: AI-based “predictive policing,” and social media scanning tools are just two examples. And according to the City Attorney, Prop E would even enable the SFPD to outfit surveillance tools such as drones and surveillance cameras with face recognition technology. San Francisco currently has a ban on police using remote-controlled robots to deploy deadly force, but if passed, Prop E would allow police to invest in technologies like taser-armed drones without any oversight or potential for elected officials to block the sale. 

Don’t let police experiment on San Franciscans with dangerous, untested surveillance technologies. Say NOPE to a surveillance state. Vote NO on Prop E on March 5.  

What is Proposition E and Why Should San Francisco Voters Oppose It?

2 February 2024 at 18:39

If you live in San Francisco, there is an election on March 5, 2024 during which voters will decide a number of specific local ballot measures—including Proposition E. Proponents of Proposition E have raised over $1 million, but what does the measure actually do? This post will break down what the initiative actually does, why it is dangerous for San Franciscans, and why you should oppose it.

What Does Proposition E Do?

Proposition E is a “kitchen sink" approach to public safety that capitalizes on residents’ fear of crime in an attempt to gut common-sense democratic oversight of the San Francisco Police Department (SFPD). In addition to removing certain police oversight authority from the Police Commission and expanding the circumstances under which police may conduct high-speed vehicle chases, Proposition E would also amend existing laws passed in 2019 to protect San Franciscans from invasive, untested, or biased police technologies.

Currently, if police want to acquire a new technology, they have to go through a procedure known as CCOPS—Community Control Over Police Surveillance. This means that police need to explain why they need a new piece of technology and provide a detailed use policy to the democratically-elected Board of Supervisors, who then vote on it. The process also allows for public comment so people can voice their support for, concerns about, or opposition to the new technology. This process is in no way designed to universally deny police new technologies. Instead, it ensures that when police want new technology that may have significant impacts on communities, those voices have an opportunity to be heard and considered. San Francisco police used this procedure to get new technological capabilities as recently as Fall 2022; the proposal stimulated discussion and garnered community involvement and opposition (including from EFF), and it still passed.

Proposition E guts these common-sense protective measures designed to bring communities into the conversation about public safety. If Proposition E passes on March 5, then the SFPD can use any technology they want for a full year without publishing an official policy about how they’d use the technology or allowing community members to voice their concerns—or really allowing for any accountability or transparency at all.

Why is Proposition E Dangerous and Unnecessary?

Across the country, police often buy and deploy surveillance equipment without residents of their towns even knowing what police are using or how they’re using it. This means that dangerous technologies—technologies other cities have even banned—are being used without any transparency or accountability. San Franciscans advocated for and overwhelmingly supported a law that provides them with more knowledge of, and a voice in, what technologies the police use. Under the current law, if the SFPD wanted to use racist predictive policing algorithms that U.S. Senators are currently advising the Department of Justice to stop funding, or if the SFPD wanted to buy up geolocation data being harvested from people’s cell phones and sold on the advertising data broker market, they would have to let the public know and put it to a vote before the city’s democratically-elected governing body. Proposition E would gut any meaningful democratic check on police’s acquisition and use of surveillance technologies.

It’s not just that these technologies could potentially harm San Franciscans by, for instance, directing armed police at them due to reliance on a faulty algorithm or putting already-marginalized communities at further risk of overpolicing and surveillance—it’s also important to note that studies find that these technologies just don’t work. Police often look to technology as a silver bullet to fight crime, despite evidence suggesting otherwise. Oversight over what technology the SFPD uses doesn’t just allow for scrutiny of discriminatory and biased policing, it also introduces a much-needed dose of reality. If police want to spend hundreds of thousands of dollars a year on software that has a success rate of 0.6% at predicting crime, they should have to go through a public process before they fork over taxpayer dollars.

What Technology Would Proposition E Allow the Police to Use?

That's the thing—we don't know, and if Proposition E passes, we may never know. Today, if police decide to use a piece of surveillance technology, there is a process for sharing that information with the public. With Proposition E, that process won't happen until the technology has been in use for a full year. And if police abandon use of a technology before a year, we may never find out what technology police tried out and how they used it. Even though we don't know what technologies the SFPD is eyeing, we do know what technologies other police departments have been buying in cities around the country: AI-based “predictive policing” and social media scanning tools are just two examples. And according to the City Attorney, Proposition E would even enable the SFPD to outfit surveillance tools such as drones and surveillance cameras with face recognition technology.

Why You Should Vote No on Proposition E

San Francisco, like many other cities, has its problems, but none of those problems will be solved by removing oversight over what technologies police spend our public money on and deploy in our neighborhoods—especially when so much police technology is known to be racially biased, invasive, or faulty. Voters should think about what San Francisco actually needs and how Proposition E is more likely to exacerbate the problems of police violence than it is to magically erase crime in the city. This is why we are urging a NO vote on Proposition E on the March 5 ballot.

San Francisco Police’s Live Surveillance Yields Almost 200 Hours of Spying–Including of Music Festivals

2 February 2024 at 16:21

A new report reveals that in just three months, from July 1 to September 30, 2023,  the San Francisco Police Department (SFPD) racked up 193 hours and 19 minutes of live access to non-city surveillance cameras. That means for the equivalent of 8 days, police sat behind a desk and tapped into hundreds of cameras, ostensibly including San Francisco’s extensive semi-private security camera networks, to watch city residents, workers, and visitors live. An article by the San Francisco Chronicle analyzing the report also uncovered that the SFPD tapped into these cameras to watch 42 hours of live footage during the Outside Lands music festival.

The city’s Board of Supervisors granted police permission to get live access to these cameras in September 2022 as part of a 15-month pilot program to see if allowing police to conduct widespread, live surveillance would create more safety for all people. However, even before this legislation’s passage, the SFPD covertly used non-city security cameras to monitor protests and other public events. In fact, police and the rich man who funded large networks of semi-private surveillance cameras both claimed publicly that the police department could easily access historic footage of incidents after the fact to help build cases, but could not peer through the cameras live. This claim was debunked by EFF and other investigators, who revealed that police had requested live access to semi-private cameras to monitor protests, parades, and public events—despite these being precisely the types of activity protected by the First Amendment.

When the Board of Supervisors passed this ordinance, which allowed police live access to non-city cameras for criminal investigations (for up to 24 hours after an incident) and for large-scale events, we warned that police would use this newfound power to put huge swaths of the city under surveillance—and we were unfortunately correct.

The most egregious example from the report is the 42 hours of live surveillance conducted during the Outside Lands music festival, which yielded five arrests for theft, pickpocketing, and resisting arrest—only one of which resulted in the District Attorney’s office filing charges. Despite proponents’ arguments that live surveillance would promote efficiency in policing, in this case it resulted in a massive use of police resources with little to show for it.

There still remain many unanswered questions about how the police are using these cameras. As the Chronicle article recognized:

…nearly a year into the experiment, it remains unclear just how effective the strategy of using private cameras is in fighting crime in San Francisco, in part because the Police Department’s disclosures don’t provide information on how live footage was used, how it led to arrests and whether police could have used other methods to make those arrests.

The need for greater transparency—and at minimum, for the police to follow all reporting requirements mandated by the non-city surveillance camera ordinance—is crucial to truly evaluate the impact that access to live surveillance has had on policing. In particular, the SFPD’s data fails to make clear how live surveillance helps police prevent or solve crimes in a way that footage after the fact does not. 

Nonetheless, surveillance proponents tout this report as showing that real-time access to non-city surveillance cameras is effective in fighting crime. Many are using this to push for a measure on the March 5, 2024 ballot, Proposition E, which would roll back police accountability measures and grant even more surveillance powers to the SFPD. In particular, Prop E would allow the SFPD a one-year pilot period to test out any new surveillance technology, without any use policy or oversight by the Board of Supervisors. As we’ve stated before, this initiative is bad all around—for policing, for civil liberties, and for all San Franciscans.

Police in San Francisco still don’t get it. They can continue to heap more time, money, and resources into fighting oversight and amassing all sorts of surveillance technology—but at the end of the day, this still won’t help combat the societal issues the city faces. Technologies touted as being useful in extreme cases will just end up as an oversized tool for policing misdemeanors and petty infractions, and will undoubtedly put already-marginalized communities further under the microscope. Just as it’s time to continue asking questions about what live surveillance helps the SFPD accomplish, it’s also time to oppose the erosion of existing oversight by voting NO on Proposition E on March 5. 

San Francisco: Vote No on Proposition E to Stop Police from Testing Dangerous Surveillance Technology on You

25 January 2024 at 13:14

San Francisco voters will confront a looming threat to their privacy and civil liberties on the March 5, 2024 ballot. If Proposition E passes, we can expect the San Francisco Police Department (SFPD) will use untested and potentially dangerous technology on the public, any time they want, for a full year without oversight. How do we know this? Because the text of the proposition explicitly permits this, and because a city government proponent of the measure has publicly said as much.

[Embedded video. This embed will serve content from youtube.com.]

While discussing Proposition E at a November 13, 2023 Board of Supervisors meeting, that city official said the new rule “authorizes the department to have a one-year pilot period to experiment, to work through new technology to see how they work.” Just watch the video above if you want to witness it being said for yourself.

They also should know how these technologies will impact communities, rather than taking a deploy-first and ask-questions-later approach...

Any privacy or civil liberties proponent should find this statement appalling. Police should know how technologies work (or if they work) before they deploy them on city streets. They also should know how these technologies will impact communities, rather than taking a deploy-first and ask-questions-later approach—which all but guarantees civil rights violations.

This ballot measure would erode San Francisco’s landmark 2019 surveillance ordinance that requires city agencies, including the police department, to seek approval from the democratically-elected Board of Supervisors before acquiring or deploying new surveillance technologies. Agencies also must provide a report to the public about exactly how the technology would be used. This is not just an important way of making sure people who live or work in the city have a say in surveillance technologies that could be used to police their communities; it’s also, by any measure, a commonsense and reasonable provision.

However, the new ballot initiative attempts to gut the 2019 surveillance ordinance. The measure says “…the Police Department may acquire and/or use a Surveillance Technology so long as it submits a Surveillance Technology Policy to the Board of Supervisors for approval by ordinance within one year of the use or acquisition, and may continue to use that Surveillance Technology after the end of that year unless the Board adopts an ordinance that disapproves the Policy…” In other words, police would be able to deploy virtually any new surveillance technology they wished for a full year without any oversight, accountability, transparency, or semblance of democratic control.

This ballot measure would turn San Francisco into a laboratory where police are given free rein to use the most unproven, dangerous technologies on residents and visitors without regard for criticism or objection.

This ballot measure would turn San Francisco into a laboratory where police are given free rein to use the most unproven, dangerous technologies on residents and visitors without regard for criticism or objection. That’s one year of police having the ability to take orders from faulty and racist algorithms. One year during which police could potentially contract with companies that buy up geolocation data from millions of cellphones and sift through the data.

Trashing important oversight mechanisms that keep police from acting without democratic checks and balances will not make the city safer. With all of the mind-boggling, dangerous, nearly-science fiction surveillance technologies currently available to local police, we must ensure that the medicine doesn’t end up doing more damage to the patient. But that’s exactly what will happen if Proposition E passes and police are able to expose already marginalized and over-surveilled communities to a new and less accountable generation of surveillance technologies. 

So, tell your friends. Tell your family. Shout it from the rooftops. Talk about it with strangers when you ride MUNI or BART. We have to get organized so we can, as a community, vote NO on Proposition E on the March 5, 2024 ballot. 

Victory! Ring Announces It Will No Longer Facilitate Police Requests for Footage from Users

24 January 2024 at 14:09

Amazon’s Ring has announced that it will no longer facilitate police's warrantless requests for footage from Ring users. This is a victory in a long fight, not just against blanket police surveillance, but also against a culture in which private, for-profit companies build special tools to allow law enforcement to more easily access companies’ users and their data—all of which ultimately undermine their customers’ trust.

This announcement will also not stop police from trying to get Ring footage directly from device owners without a warrant. Ring users should also know that when police knock on their door, they have the right to—and should—request that police get a warrant before handing over footage.

Years ago, after public outcry and a lot of criticism from EFF and other organizations, Ring ended its practice of allowing police to automatically send requests for footage to a user’s email inbox, opting instead for a system where police had to publicly post requests onto Ring’s Neighbors app. Now, Ring hopefully will altogether be out of the business of platforming casual and warrantless police requests for footage to its users. This is a step in the right direction, but it has come after years of cozy relationships with police and irresponsible handling of data (for which Ring reached a settlement with the FTC). We also helped to push Ring to implement end-to-end encryption. Ring has been forced to make some important concessions—but we still believe the company must do more. Ring can enable its devices to be encrypted end-to-end by default and turn off default audio collection, which reports have shown collects audio from greater distances than initially assumed. We also remain deeply skeptical about law enforcement’s and Ring’s ability to determine what is, or is not, an emergency that requires the company to hand over footage without a warrant or user consent.

Despite this victory, the fight for privacy and to end Ring’s historic ill effects on society isn’t over. The mass existence of doorbell cameras, whether subsidized and organized into registries by cities or connected and centralized through technologies like Fusus, will continue to threaten civil liberties and exacerbate racial discrimination. Many other companies have also learned from Ring’s early marketing tactics and have sought to create a new generation of police-advertisers who promote the purchase and adoption of their technologies. This announcement will also not stop police from trying to get Ring footage directly from device owners without a warrant. Ring users should also know that when police knock on their door, they have the right to—and should—request that police get a warrant before handing over footage.
