Flock Safety and Texas Sheriff Claimed License Plate Search Was for a Missing Person. It Was an Abortion Investigation.

New documents and court records obtained by EFF show that Texas deputies queried Flock Safety's surveillance data in an abortion investigation, contradicting the narrative promoted by the company and the Johnson County Sheriff that the woman was “being searched for as a missing person,” and that “it was about her safety.”

The new information shows that deputies had initiated a "death investigation" of a "non-viable fetus," logged evidence of a woman’s self-managed abortion, and consulted prosecutors about possibly charging her. 

Johnson County Sheriff Adam King repeatedly denied the automated license plate reader (ALPR) search was related to enforcing Texas's abortion ban, and Flock Safety called media accounts "false," "misleading" and "clickbait." However, according to a sworn affidavit by the lead detective, the case was in fact a death investigation in response to a report of an abortion, and deputies collected documentation of the abortion from the "reporting person," her alleged romantic partner. The death investigation remained open for weeks, with detectives interviewing the woman and reviewing her text messages about the abortion. 

The documents show that the Johnson County District Attorney's Office informed deputies that "the State could not statutorily charge [her] for taking the pill to cause the abortion or miscarriage of the non-viable fetus."

An excerpt from the JCSO detective's sworn affidavit, in which the detective describes receiving evidence and calling the District Attorney's Office.

The records include previously unreported details about the case that shocked public officials and reproductive justice advocates across the country when it was first reported by 404 Media in May. The case serves as a clear warning sign that when data from ALPRs is shared across state lines, it can put people at risk, including abortion seekers. And, in this case, the use may have run afoul of laws in Washington and Illinois.

A False Narrative Emerges

Last May, 404 Media obtained data revealing the Johnson County Sheriff’s Office conducted a nationwide search of more than 83,000 Flock ALPR cameras, giving the reason in the search log: “had an abortion, search for female.” Both the Sheriff's Office and Flock Safety have attempted to downplay the search as akin to a search for a missing person, claiming deputies were only looking for the woman to “check on her welfare” and that officers found a large amount of blood at the scene – a claim now contradicted by the responding investigator’s affidavit. Flock Safety went so far as to assert that journalists and advocates covering the story intentionally misrepresented the facts, describing it as "misreporting" and "clickbait-driven." 

As Flock wrote of EFF's previous commentary on this case (bold in original statement): 

Earlier this month, there was purposefully misleading reporting that a Texas police officer with the Johnson County Sheriff’s Office used LPR “to target people seeking reproductive healthcare.” This organization is actively perpetuating narratives that have been proven false, even after the record has been corrected.

According to the Sheriff in Johnson County himself, this claim is unequivocally false.

… No charges were ever filed against the woman and she was never under criminal investigation by Johnson County. She was being searched for as a missing person, not as a suspect of a crime.

That sheriff has since been arrested and indicted on felony counts in an unrelated sexual harassment and whistleblower retaliation case. He has also been charged with aggravated perjury for allegedly lying to a grand jury. EFF filed public records requests with Johnson County to obtain a more definitive account of events.

The newly released incident report and affidavit unequivocally describe the case as a "death investigation" of a "non-viable fetus." These documents also undermine the claim that the ALPR search was in response to a medical emergency, since, in fact, the abortion had occurred more than two weeks before deputies were called to investigate. 

In recent years, anti-abortion advocates and prosecutors have increasingly attempted to use “fetal homicide” and “wrongful death” statutes – originally intended to protect pregnant people from violence – to criminalize abortion and pregnancy loss. These laws, which exist in dozens of states, establish legal personhood of fetuses and can be weaponized against people who end their own pregnancies or experience a miscarriage. 

In fact, a new report from Pregnancy Justice found that in just the first two years since the Supreme Court’s decision in Dobbs, prosecutors initiated at least 412 cases charging pregnant people with crimes related to pregnancy, pregnancy loss, or birth–most under child neglect, endangerment, or abuse laws that were never intended to target pregnant people. Nine cases included allegations around individuals’ abortions, such as possession of abortion medication or attempts to obtain an abortion–instances just like this one. The report also highlights how, in many instances, prosecutors use tangentially related criminal charges to punish people for abortion, even when abortion itself is not illegal.

By framing their investigation of a self-administered abortion as a “death investigation” of a “non-viable fetus,” Texas law enforcement was signaling their intent to treat the woman’s self-managed abortion as a potential homicide, even though Texas law does not allow criminal charges to be brought against an individual for self-managing their own abortion. 

The Investigator's Sworn Account

Over two days in April, the woman went through the process of taking medication to induce an abortion. Two weeks later, her partner–who would later be charged with domestic violence against her–reported her to the sheriff's office. 

The documents confirm that the woman was not present at the home when the deputies “responded to the death (Non-viable fetus).” As part of the investigation, officers collected evidence that the man had assembled of the self-managed abortion, including photographs, the FedEx envelope the medication arrived in, and the instructions for self-administering the medication. 

Another Johnson County official ran two searches through the ALPR database with the note "had an abortion, search for female," according to Flock Safety search logs obtained by EFF. The first search, which has not been previously reported, probed 1,295 Flock Safety networks–composed of 17,684 different cameras–going back one week. The second search, which was originally exposed by 404 Media, was expanded to a full month of data across 6,809 networks, including 83,345 cameras. Both searches listed the same case number that appears on the death investigation/incident report obtained by EFF. 
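
The audit logs at issue here are essentially tabular records: a timestamp, the user, the free-text search reason, a case number, and the number of networks and cameras queried. As a rough illustration of how a journalist or auditor might screen such an export for searches like these, here is a minimal Python sketch; the file name and column names are our own assumptions, not Flock Safety's actual export format.

    import csv

    # Hypothetical file and column names; a real Flock audit export may differ.
    AUDIT_LOG = "flock_audit_log.csv"

    def flagged_searches(path, keywords=("abortion", "miscarriage")):
        """Yield audit-log rows whose stated search reason contains any keyword."""
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                reason = (row.get("reason") or "").lower()
                if any(k in reason for k in keywords):
                    yield row

    if __name__ == "__main__":
        for row in flagged_searches(AUDIT_LOG):
            print(row.get("timestamp"), row.get("case_number"),
                  f"{row.get('networks')} networks / {row.get('cameras')} cameras",
                  "-", row.get("reason"))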

After collecting the evidence from the woman’s partner, the investigators say they consulted the district attorney’s office, only to be told they could not press charges against the woman. 

An excerpt from the JCSO detective's sworn affidavit, describing the investigation of the abortion.

Nevertheless, when the subject showed up at the Sheriff’s office a week later, officers were under the impression that she came “to tell her side of the story about the non-viable fetus.” They interviewed her, inspected text messages about the abortion on her phone, and watched her write a timeline of events.

Only after all that did they learn that she actually wanted to report a violent assault by her partner–the same individual who had called the police to report her abortion. She alleged that less than an hour after the abortion, he choked her, put a gun to her head, and made her beg for her life. The man was ultimately charged in connection with the assault, and the case is ongoing. 

This documented account runs completely counter to what law enforcement and Flock have said publicly about the case. 

Johnson County Sheriff Adam King told 404 Media: "Her family was worried that she was going to bleed to death, and we were trying to find her to get her to a hospital.” He later told the Dallas Morning News: “We were just trying to check on her welfare and get her to the doctor if needed, or to the hospital."

The account by the detective on the scene makes no mention of concerned family members or a medical investigator. To the contrary, the affidavit says that they questioned the man as to why he "waited so long to report the incident," and he responded that he needed to "process the event and call his family attorney." The ALPR search was recorded 2.5 hours after the initial call came in, as documented in the investigation report.

The Desk Sergeant's Report, One Month Later

EFF obtained a separate "case supplemental report" written by the sergeant who says he ran the May 9 ALPR searches. 

The sergeant was not present at the scene, and his account was written belatedly on June 5, almost a month after the incident and nearly a week after 404 Media had already published the sheriff’s alternative account of the Flock Safety search, kicking off a national controversy. The sheriff's office provided this sergeant's report to the Dallas Morning News.

In the report, the sergeant claims that the officers on the ground asked him to start "looking up" the woman due to there being "a large amount of blood" found at the residence, an unsubstantiated claim that is in conflict with the lead investigator’s affidavit. The sergeant repeatedly expresses that the situation was "not making sense." He claims he was worried that the partner had hurt the woman and her children, so "to check their welfare," he used TransUnion's TLO commercial investigative database system to look up her address. Once he identified her vehicle, he ran the plate through the Flock database, returning hits in Dallas.

A data table showing two abortion-related searches in the JCSO's Flock Safety ALPR audit log.

The sergeant's report, filed after the case attracted media attention, notably omits any mention of the abortion at the center of the investigation, although it does note that the caller claimed to have found a fetus. The report does not explain, or even address, why the sergeant used the phrase "had an abortion, search for female” as the official reason for the ALPR searches in the audit log. 

It's also unclear why the sergeant submitted the supplemental report at all, weeks after the incident. By that time, the lead investigator had already filed a sworn affidavit that contradicted the sergeant's account. For example, the investigator, who was on the scene, does not describe finding any blood or taking blood samples into evidence, only photographs of what the partner believed to be the fetus. 

One area where they concur: both reports are clearly marked as a "death investigation." 

Correcting the Record

Since 404 Media first reported on this case, King has perpetuated the false narrative, telling reporters that the woman was never under investigation, that officers had not considered charges against her, and that "it was all about her safety."

But here are the facts: 

  • The reports that have been released so far describe this as a death investigation.
  • The lead detective described himself as "working a death investigation… of a non-viable fetus" at the time he interviewed the woman (a week after the ALPR searches).
  • The detective wrote that they consulted the district attorney's office about whether they could charge her for "taking the pill to cause the abortion or miscarriage of the non-viable fetus." They were told they could not.
  • Investigators collected a lot of data, including photos and documentation of the abortion, and ran her through multiple databases. They even reviewed her text messages about the abortion. 
  • The death investigation was open for more than a month.

The death investigation was only marked closed in mid-June, weeks after 404 Media's article and a mere days before the Dallas Morning News published its report, in which the sheriff inaccurately claimed the woman "was not under investigation at any point."

Flock has promoted this unsupported narrative on its blog and in multimedia appearances. We did not reach out to Flock for comment on this article, as their communications director previously told us the company will not answer our inquiries until we "correct the record and admit to your audience that you purposefully spread misinformation which you know to be untrue" about this case. 

Consider the record corrected: It turns out the truth is even more damning than initially reported.

The Aftermath

In the aftermath of the original reporting, government officials began to take action. The networks searched by Johnson County included cameras in Illinois and Washington state, both states where abortion access is protected by law. Since then: 

  • The Illinois Secretary of State has announced his intent to “crack down on unlawful use of license plate reader data,” and urged the state’s Attorney General to investigate the matter. 
  • In California, which also has prohibitions on sharing ALPR out of state and for abortion-ban enforcement, the legislature cited the case in support of pending legislation to restrict ALPR use.
  • Ranking Members of the House Oversight Committee and one of its subcommittees launched a formal investigation into Flock’s role in “enabling invasive surveillance practices that threaten the privacy, safety, and civil liberties of women, immigrants, and other vulnerable Americans.” 
  • Senator Ron Wyden secured a commitment from Flock to protect Oregonians' data from out-of-state immigration and abortion-related queries.

In response to mounting pressure, Flock announced a series of new features supposedly designed to prevent future abuses. These include blocking “impermissible” searches, requiring that all searches include a “reason,” and implementing AI-driven audit alerts to flag suspicious activity. But as we've detailed elsewhere, these measures are cosmetic at best—easily circumvented by officers using vague search terms or reusing legitimate case numbers. The fundamental architecture that enabled the abuse remains unchanged. 
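
To see why a mandatory free-text “reason” adds little protection on its own, consider a deliberately simple sketch of what such a check could amount to. This is a hypothetical illustration of the weakness described above, not a description of Flock's actual implementation; the blocklist terms are assumptions.

    BLOCKED_TERMS = {"abortion", "immigration"}  # hypothetical blocklist of "impermissible" purposes

    def search_allowed(reason: str) -> bool:
        """Naive filter: reject a search only if its stated reason contains a blocked term."""
        return not any(term in reason.lower() for term in BLOCKED_TERMS)

    print(search_allowed("had an abortion, search for female"))  # False: caught by the blocklist
    print(search_allowed("welfare check, case 25-12345"))        # True: a vague reason sails through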

Meanwhile, as the news continued to harm the company's sales, Flock CEO Garrett Langley embarked on a press tour to smear reporters and others who had raised alarms about the usage. In an interview with Forbes, he even doubled down and extolled the use of the ALPR in this case. 

So when I look at this, I go “this is everything’s working as it should be.” A family was concerned for a family member. They used Flock to help find her, when she could have been unwell. She was physically okay, which is great. But due to the political climate, this was really good clickbait.

Nothing about this is working as it should, but it is working as Flock designed. 

The Danger of Unchecked Surveillance

A pair of Flock Safety ALPR cameras on a pole, with a solar panel.

This case reveals the fundamental danger of allowing companies like Flock Safety to build massive, interconnected surveillance networks that can be searched across state lines with minimal oversight. When a single search query can access more than 83,000 cameras spanning almost the entire country, the potential for abuse is staggering, particularly when weaponized against people seeking reproductive healthcare. 

The searches in this case may have violated laws in states like Washington and Illinois, where restrictions exist specifically to prevent this kind of surveillance overreach. But those protections mean nothing when a Texas deputy can access cameras in those states with a few keystrokes, without external review that the search is legal and legitimate under local law. In this case, external agencies should have seen the word "abortion" and questioned the search, but the next time an officer is investigating such a case, they may use a more vague or misleading term to justify the search. In fact, it's possible it has already happened. 

ALPRs were marketed to the public as tools to find stolen cars and locate missing persons. Instead, they've become a dragnet that allows law enforcement to track anyone, anywhere, for any reason—including investigating people's healthcare decisions. This case makes clear that neither the companies profiting from this technology nor the agencies deploying it can be trusted to tell the full story about how it's being used.

States must ban law enforcement from using ALPRs to investigate healthcare decisions and prohibit sharing data across state lines. Local governments may try remedies like reducing data retention periods to minutes instead of weeks or months—but, really, ending their ALPR programs altogether is the strongest way to protect their most vulnerable constituents. Without these safeguards, every license plate scan becomes a potential weapon against a person seeking healthcare.


#StopCensoringAbortion: What We Learned and Where We Go From Here

This is the tenth and final installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

When we launched Stop Censoring Abortion, our goals were to understand how social media platforms were silencing abortion-related content, gather data and lift up stories of censorship, and hold social media companies accountable for the harm they have caused to the reproductive rights movement.

Thanks to nearly 100 submissions from educators, advocates, clinics, researchers, and individuals around the world, we confirmed what many already suspected: this speech is being removed, restricted, and silenced by platforms at an alarming rate. Together, our findings paint a clear picture of censorship in action: platforms’ moderation systems are not only broken, but are actively harming those seeking and sharing vital reproductive health information.

Here are the key lessons from this campaign: what we uncovered, how platforms can do better, and why pushing back against this censorship matters more now than ever.

Lessons Learned

Across our submissions, we saw systemic over-enforcement, vague and convoluted policies, arbitrary takedowns, sudden account bans, and ignored appeals. And in almost every case we reviewed, the posts and accounts in question did not violate any of the platform’s stated rules.

The most common reason Meta gave for removing abortion-related content was that it violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.” But most of the content submitted simply provided factual, educational information that clearly did not violate those rules. As we saw in the M+A Hotline’s case, this kind of misclassification deprives patients, advocates, and researchers of reliable information, and chills those trying to provide accurate and life-saving reproductive health resources.

In one submission, we even saw posts sharing educational abortion resources get flagged under the “Dangerous Organizations and Individuals” policy, a rule intended to prevent terrorism and criminal activity. We’ve seen this policy cause problems in the past, but in the reproductive health space, treating legal and accurate information as violent or unlawful only adds needless stigma and confusion.

Meta’s convoluted advertising policies add another layer of harm. There are specific, additional rules users must navigate to post paid content about abortion. While many of these rules still contain exceptions for purely educational content, Meta is vague about how and when those exceptions apply. And ads that seem like they should have been allowed were frequently flagged under rules about “prescription drugs” or “social issues.” This patchwork of unclear policies forces users to second-guess what content they can post or promote for fear of losing access to their networks.

In another troubling trend, many of our submitters reported experiencing shadowbanning and de-ranking, where posts weren’t removed but were instead quietly suppressed by the algorithm. This kind of suppression leaves advocates without any notice, explanation, or recourse—and severely limits their ability to reach people who need the information most.  

Many users also faced sudden account bans without warning or clear justification. Though Meta’s policies dictate that an account should only be disabled or removed after “repeated” violations, organizations like Women Help Women received no warning before seeing their critical connections cut off overnight.

Finally, we learned that Meta’s enforcement outcomes were deeply inconsistent. Users often had their appeals denied and accounts suspended until someone with insider access to Meta could intervene. For example, the Red River Women’s Clinic, RISE at Emory, and Aid Access each had their accounts restored only after press attention or personal contacts stepped in. This reliance on backchannels underscores the inequity in Meta’s moderation processes: without connections, users are left unfairly silenced.

It’s Not Just Meta

Most of our submissions detailed suppression that took place on one of Meta’s platforms (Facebook, Instagram, WhatsApp and Threads), so we decided to focus our analysis on Meta’s moderation policies and practices. But we should note that this problem is by no means confined to Meta.

On LinkedIn, for example, Stephanie Tillman told us about how she had her entire account permanently taken down, with nothing more than a vague notice that she had violated LinkedIn’s User Agreement. When Stephanie reached out to ask what violation she committed, LinkedIn responded that “due to our Privacy Policy we are unable to release our findings,” leaving her with no clarity or recourse. Stephanie suspects that the ban was related to her work with Repro TLC, an advocacy and clinical health care organization, and/or her posts relating to her personal business, Feminist Midwife LLC. But LinkedIn’s opaque enforcement meant she had no way to confirm these suspicions, and no path to restoring her account.

Screenshot submitted by Stephanie Tillman to EFF (with personal information redacted by EFF)

And over on TikTok, Brenna Miller, a creator who works in health care and frequently posts about abortion, posted a video of her “unboxing” an abortion pill care package from Carafem. Though Brenna’s video was factual and straightforward, TikTok removed it, saying that she had violated TikTok’s Community Guidelines.

Screenshot submitted by Brenna Miller to EFF

Brenna appealed the removal successfully at first, but a few weeks later the video was permanently deleted—this time, without any explanation or chance to appeal again.

Brenna’s far from the only one experiencing censorship on TikTok. Even Jessica Valenti, award-winning writer, activist, and author of the Abortion Every Day newsletter, recently had a video taken down from TikTok for violating its community guidelines, with no further explanation. The video she posted was about the Trump administration calling IUDs and the Pill ‘abortifacients.’ Jessica wrote:

Which rule did I break? Well, they didn’t say: but I wasn’t trying to sell anything, the video didn’t feature nudity, and I didn’t publish any violence. By process of elimination, that means the video was likely taken down as "misinformation." Which is…ironic.

These are not isolated incidents. In the Center for Intimacy Justice’s survey of reproductive rights advocates, health organizations, sex educators, and businesses, 63% reported having content removed on Meta platforms, 55% reported the same on TikTok, and 66% reported having ads rejected from Google platforms (including YouTube). Clearly, censorship of abortion-related content is a systemic problem across platforms.

How Platforms Can Do Better on Abortion-Related Speech

Based on our findings, we're calling on platforms to take these concrete steps to improve moderation of abortion-related speech:

  • Publish clear policies. Users should not have to guess whether their speech is allowed or not.
  • Enforce rules consistently. If a post does not violate a written standard, it should not be removed.
  • Provide real transparency. Enforcement decisions must come with clear, detailed explanations and meaningful opportunities to appeal.
  • Guarantee functional appeals. Users must be able to challenge wrongful takedowns without relying on insider contacts.
  • Expand human review. Reproductive rights is a nuanced issue and can be too complex to be left entirely to error-prone automated moderation systems.

Practical Tips for Users

Don’t get it twisted: Users should not have to worry about their posts being deleted or their accounts getting banned when they share factual information that doesn’t violate platform policies. The onus is on platforms to get it together and uphold their commitments to users. But while platforms continue to fail, we’ve provided some practical tips to reduce the risk of takedowns, including:

  • Consider limiting commonly flagged words and images. Posts with pill images or certain keyword combinations (like “abortion,” “pill,” and “mail”) were often flagged.
  • Be as clear as possible. Vague phrases like “we can help you get what you need” might look like drug sales to an algorithm.
  • Be careful with links. Direct links to pill providers were often flagged. Spell out the links instead.
  • Expect stricter rules for ads. Boosted posts face harsher scrutiny than regular posts.
  • Appeal wrongful enforcement decisions. Requesting an appeal might get you a human moderator or, even better, review from Meta’s independent Oversight Board.
  • Document everything and back up your content. Screenshot all communications and enforcement decisions so you can share them with the press or advocacy groups, and export your data regularly in case your account vanishes overnight.

Keep Fighting

Abortion information saves lives, and social media is the primary—and sometimes only—way for advocates and providers to get accurate information out to the masses. But now we have evidence that this censorship is widespread, unjustified, and harming communities who need access to this information most.

Platforms must be held accountable for these harms, and advocates must continue to speak out. The more we push back—through campaigns, reporting, policy advocacy, and user action—the harder it will be for platforms to look away.

So keep speaking out, and keep demanding accountability. Platforms need to know we're paying attention—and we won't stop fighting until everyone can share information about abortion freely, safely, and without fear of being silenced.

This is the tenth and final post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion.  

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 


Tips to Protect Your Posts About Reproductive Health From Being Removed

This is the ninth installment in a blog series documenting EFF’s findings from the Stop Censoring Abortion campaign. You can read additional posts here.  

Meta has been getting content moderation wrong for years, like most platforms that host user-generated content. Sometimes it’s a result of deliberate design choices—privacy rollbacks, opaque policies, features that prioritize growth over safety—made even when the company knows that those choices could negatively impact users. Other times, it’s simply the inevitable outcome of trying to govern billions of posts with a mix of algorithms and overstretched human reviewers. Importantly, users shouldn’t have to worry about their posts being deleted or their accounts getting banned when they share factual health information that doesn’t violate the platforms' policies. But knowing more about what the algorithmic moderation is likely to flag can help you to avoid its mistakes. 

We analyzed the roughly one-hundred survey submissions we received from social media users in response to our Stop Censoring Abortion campaign. Their stories revealed some clear patterns: certain words, images, and phrases seemed to trigger takedowns, even when posts didn’t come close to violating Meta’s rules. 

For example, your post linking to information on how people are accessing abortion pills online clearly is not an offer to buy or sell pills, but an algorithm, or a human content reviewer who doesn’t know for sure, might wrongly flag it for violating Meta’s policies on promoting or selling “restricted goods.” 

That doesn’t mean you’re powerless. For years, people have used “algospeak”—creative spelling, euphemisms, or indirection—to sidestep platform filters. Abortion rights advocates are now forced into similar strategies, even when their speech is perfectly legal. It’s not fair, but it might help you keep your content online. Here are some things we learned from our survey: 

Practical Tips to Reduce the Risk of Takedowns 

While traditional social media platforms can help people reach larger audiences, using them also generally means you have to hand over control of what you and others are able to see to the people who run the company. This is the deal that large platforms offer—and while most of us want platforms to moderate some content (even if that moderation is imperfect), current systems of moderation often reflect existing societal power imbalances and impact marginalized voices the most. 

There are ways companies and governments could better balance the power between users and platforms. In the meantime, there are steps you can take right now to break the hold these platforms have:   

  • Images and keywords matter. Posts with pill images, or accounts with “pill” in their names, were flagged often—even when the posts weren’t offering to sell medication. Before posting, consider whether you need to include an image of, or the word “pill,” or whether there’s another way to communicate your message. 
  • Clarity beats vagueness. Saying “we can help you find what you need” or “contact me for more info” might sound innocuous, but to an algorithm, it can look like an offer to sell drugs. Spell out what kind of support you do and don’t provide—for example: “We can talk through options and point you toward trusted resources. We don’t provide medical services or medication.” 
  • Be careful with links. Direct links to organizations or services that provide abortion pills were often flagged, even if the organizations operate legally. Instead of linking, try spelling out the name of the site or account. 
  • Certain word combos are red flags. Posts that included words like “mifepristone,” “abortion,” and “mail” together were frequently removed. You may still want to use them—they’re accurate and important—but know they make your post more likely to be flagged (see the quick self-check sketch after this list). 
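
As a concrete illustration of that last point, here is a minimal self-check sketch that scans a draft post for combinations of commonly flagged terms before you publish. The word list and the two-term threshold are our own illustrative assumptions drawn from survey reports, not a reconstruction of any platform's actual filters.

    # Terms our survey respondents reported as frequently co-occurring in flagged posts.
    # The list and the "two or more" threshold are illustrative assumptions, not a
    # reconstruction of any platform's actual moderation rules.
    FLAG_PRONE = {"abortion", "pill", "pills", "mifepristone", "misoprostol", "mail"}

    def flag_prone_terms(draft: str) -> set:
        """Return the flag-prone terms that appear in a draft post."""
        words = {w.strip(".,!?:;\"'()").lower() for w in draft.split()}
        return words & FLAG_PRONE

    draft = "We can mail you information about mifepristone and abortion pills."
    hits = flag_prone_terms(draft)
    if len(hits) >= 2:
        print("Heads up: this draft combines commonly flagged terms:", sorted(hits))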

Alternatives and Backups 

Big platforms give you reach, but they also set the rules—and those rules usually favor corporate interests over human rights. You don’t have to accept that as the only way forward: 

  • Keep a backup. Export your data regularly so you’re not left empty-handed if your account disappears overnight. 
  • Build your own space. Hosting a website isn’t free, but it puts you in control. 
  • Push for interoperability. Imagine being able to take your audience with you when you leave a platform. That’s the future we should be fighting for. (For more on interoperability and Meta, check out this video where Cory Doctorow explains what an interoperable Facebook would look like.) 

Protect Your Privacy 

If you’re working in abortion access—whether as a provider, activist, or volunteer—your privacy and security matter. The same is true for patients. Check out EFF’s Surveillance Self-Defense for tailored guides. Look at resources from groups like Digital Defense Fund and learn how location tracking tools can endanger abortion access. If you run an organization, consider some of the ways you can minimize what information you collect about patients, clients, or customers, in our guide to Online Privacy for Nonprofits. 

Platforms like Meta insist they want to balance free expression and safety, but their blunt systems consistently end up reinforcing existing inequalities—silencing the very people who most need to be heard. Until they do better, it’s on us to protect ourselves, share our stories, and keep building the kind of internet that respects our rights. 

This is the ninth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion 

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 


Platforms Have Failed Us on Abortion Content. Here's How They Can Fix It.

This is the eighth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

In our Stop Censoring Abortion series, we’ve documented the many ways that reproductive rights advocates have faced arbitrary censorship on Meta platforms. Since social media is the primary—and sometimes the only—way that providers, advocates, and communities can safely and effectively share timely and accurate information about abortion, it’s vitally important that platforms take steps to proactively protect this speech.

Yet, even though Meta says its moderation policies allow abortion-related speech, its enforcement of those policies tells a different story. Posts are being wrongfully flagged, accounts are disappearing without warning, and important information is being removed without clear justification.

So what explains the gap between Meta’s public commitments and its actions? And how can we push platforms to be better—to, dare we say, #StopCensoringAbortion?

After reviewing nearly one-hundred submissions and speaking with Meta to clarify their moderation practices, here’s what we’ve learned.

Platforms’ Editorial Freedom to Moderate User Content

First, given the current landscape—with some states trying to criminalize speech about abortion—you may be wondering how much leeway platforms like Facebook and Instagram have to choose their own content moderation policies. In other words, can social media companies proactively commit to stop censoring abortion?

The answer is yes. Social media companies, including Meta, TikTok, and X, have the constitutionally protected First Amendment right to moderate user content however they see fit. They can take down posts, suspend accounts, or suppress content for virtually any reason.

The Supreme Court explicitly affirmed this right in 2024 in Moody v. NetChoice, holding that social media platforms, like newspapers, bookstores, and art galleries before them, have the First Amendment right to edit the user speech that they host and deliver to other users on their platforms. The Court also established that the government has a very limited role in dictating what social media platforms must (or must not) publish. This editorial discretion, whether granted to individuals, traditional press, or online platforms, is meant to protect these institutions from government interference and to safeguard the diversity of the public sphere—so that important conversations and movements like this one have the space to flourish.

Meta’s Broken Promises

Unfortunately, Meta is failing to meet even these basic standards. Again and again, its policies say one thing while its actual enforcement says another.

Meta has stated its intent to allow conversations about abortion to take place on its platforms. In fact, as we’ve written previously in this series, Meta has publicly insisted that posts with educational content about abortion access should not be censored, even admitting in several public statements to moderation mistakes and over-enforcement. One spokesperson told the New York Times: “We want our platforms to be a place where people can access reliable information about health services, advertisers can promote health services and everyone can discuss and debate public policies in this space. . . . That’s why we allow posts and ads about, discussing and debating abortion.”

Meta’s platform policies largely reflect this intent. But as our campaign reveals, Meta’s enforcement of those policies is wildly inconsistent. Time and again, users—including advocacy organizations, healthcare providers, and individuals sharing personal stories—have had their content taken down even though it did not actually violate any of Meta’s stated guidelines. Worse, they are often left in the dark about what happened and how to fix it.

Arbitrary enforcement like this harms abortion activists and providers by cutting them off from their audiences, wasting the effort they spend creating resources and building community on these platforms, and silencing their vital reproductive rights advocacy. And it goes without saying that it hurts users, who need access to timely, accurate, and sometimes life-saving information. At a time when abortion rights are under attack, platforms with enormous resources—like Meta—have no excuse for silencing this important speech.  

Our Call to Platforms

Our case studies have highlighted that when users can’t rely on platforms to apply their own rules fairly, the result is a widespread chilling effect on online speech. That’s why we are calling on Meta to adopt the following urgent changes.

1. Publish clear and understandable policies.

Too often, platforms’ vague rules force users to guess what content might be flagged in order to avoid shadowbanning or worse, leading to needless self-censorship. To prevent this chilling effect, platforms should strive to offer users the greatest possible transparency and clarity on their policies. The policies should be clear enough that users know exactly what is allowed and what isn’t so that, for example, no one is left wondering how exactly a clip of women sharing their abortion experiences could be mislabeled as violent extremism.

2. Enforce rules consistently and fairly.

If content doesn’t violate a platform’s stated policies, it should not be removed. And, per Meta’s own policies, an account should not be suspended for abortion-related content violations if it has not received any prior warnings or “strikes.” Yet as we’ve seen throughout this campaign, abortion advocates repeatedly face takedowns or even account suspensions of posts that fall entirely within Meta’s Community Standards. On such a massive scale, this selective enforcement erodes trust and chills entire communities from participating in critical conversations. 

3. Provide meaningful transparency in enforcement actions.

When content is removed, Meta tends to give vague, boilerplate explanations—or none at all. Instead, users facing takedowns or suspensions deserve detailed and accurate explanations that state the policy violated, reflect the reasoning behind the actual enforcement decision, and describe how to appeal the decision. Clear explanations are key to preventing wrongful censorship and ensuring that platforms remain accountable to their commitments and to their users.

4. Guarantee functional appeals.

Every user deserves a real chance to challenge improper enforcement decisions and have them reversed. But based on our survey responses, it seems Meta’s appeals process is broken. Many users reported that they do not receive responses to appeals, even when the content did not violate Meta’s policies, and thus have no meaningful way to challenge takedowns. Alarmingly, we found that a user’s best (and sometimes only) chance at success is to rely on a personal connection at Meta to right wrongs and restore content. This is unacceptable. Users should have a reliable and efficient appeal process that does not depend on insider access.   

5. Expand human review.

Finally, automated systems cannot always handle the nuance of sensitive issues like reproductive health and advocacy. They misinterpret words, miss important cultural or political context, and wrongly flag legitimate advocacy as “dangerous.” Therefore, we call upon platforms to expand the role that human moderators play in reviewing auto-flagged content violations—especially when posts involve sensitive healthcare information or political expression.

Users Deserve Better

Meta has already made the choice to allow speech about abortion on its platforms, and it has not hesitated to highlight that commitment whenever it has faced scrutiny. Now it’s time for Meta to put its money where its mouth is.

Users deserve better than a system where rules are applied at random, appeals go nowhere, and vital reproductive health information is needlessly (or negligently) silenced. If Meta truly values free speech, it must commit to moderating with fairness, transparency, and accountability.

This is the eighth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion   

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 


The Abortion Hotline Meta Wants to Go Dark

This is the sixth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

When we started our Stop Censoring Abortion campaign, we heard from activists, advocacy organizations, researchers, and even healthcare providers who had all experienced having abortion-related content removed or suppressed on social media. One of the submissions we received was from an organization called the Miscarriage and Abortion Hotline.

The Miscarriage and Abortion Hotline (M+A Hotline), formed in 2019, is staffed by a team of healthcare providers who wanted to provide free and confidential “expert advice on various aspects of miscarriage and abortion, ensuring individuals receive accurate information and compassionate support throughout their journey.” By 2022, the hotline was receiving between 25 and 45 calls and texts a day.

Like many reproductive health, rights, and justice groups, the M+A Hotline is active on social media, sharing posts that affirm the voices and experiences of abortion seekers, assert the safety of medication abortion, and spread the word about the expert support that the hotline offers. However, in late March of this year, the M+A Hotline’s Instagram suddenly had numerous posts taken down and was hit with restrictions that prevented the account from starting or joining livestreams or creating ads until June 25, 2025.

Screenshots provided to EFF from M+A Hotline

The reason behind the restrictions and takedowns, according to Meta, was that the M+A Hotline’s Instagram account failed to follow Meta’s guidelines on the sale of illegal or regulated goods. The “guidelines” refer to Meta’s Community Standards, which dictate the types of content that are allowed on Facebook, Instagram, Messenger, and Threads. But according to Meta, it is not against these Community Standards to provide guidance on how to legally access pharmaceutical drugs, and this is treated differently than an offer to buy, sell, or trade pharmaceuticals (though there are additional compliance requirements for paid ads).

Under these rules, the M+A Hotline’s content should have been fine: The Hotline does not sell medication abortion and simply educates on the efficacy and safety of medication abortion while providing guidance on how abortion seekers could legally access the pills. Despite this, around 10 posts from the account were removed by Instagram, none of which were ads.

In a letter to Amnesty International in February 2024, Meta publicly clarified that organic content on its platforms that educates users about medication abortion is not in violation of the Community Standards. The company claims that the policies are “based on feedback from people and the advice of experts in fields like technology, public safety and human rights.” The Community Standards are thorough and there are sections covering everything from bullying and harassment to account integrity to restricted goods and services. Notably, within the several webpages that make up the Community Standards, there are very few mentions of the words “abortion” and “reproductive health.” For how little the topic is mentioned in these Standards, content about abortion seems to face extremely high scrutiny from Meta.

Screenshots provided to EFF from M+A Hotline

Not only were posts removed, but even after further review, many were not restored. The M+A Hotline was once again told that their content violates the Community Standards on drugs. While it’s understandable that moderation systems may make mistakes, it’s unacceptable for those mistakes to be repeated consistently with little transparency or direct communication with the users whose speech is being restricted and erased. This problem is only made worse by lack of helpful recourse. As seen here, even when users request review and identify these moderation errors, Meta may still refuse to restore posts that are permitted under the Community Standards.

The removal of the M+A Hotline’s educational content demonstrates that Meta must be more accurate, consistent, and transparent in the enforcement of their Community Standards, especially in regard to reproductive health information. Informing users that medical professionals are available to support those navigating a miscarriage or abortion is plainly not an attempt to buy or sell pharmaceutical drugs. Meta must clearly define, and then fairly enforce, what is and isn’t permitted under its Standards. This includes ensuring there is a meaningful way to quickly rectify any moderation errors through the review process.

At a time when attacks on online access to information—and particularly abortion information—are intensifying, Meta must not exacerbate the problem by silencing healthcare providers and suppressing vital health information. We must all continue to fight back against online censorship.

 This is the sixth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion


Meta is Removing Abortion Advocates' Accounts Without Warning

This is the fifth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

When the team at Women Help Women signed into Instagram last winter, they were met with a distressing surprise: without warning, Meta had disabled their account. The abortion advocacy non-profit organization found itself suddenly cut off from its tens of thousands of followers and with limited recourse. Meta claimed Women Help Women had violated its Community Standards on “guns, drugs, and other restricted goods,” but the organization told EFF it uses Instagram only to communicate about safe abortion practices, including sharing educational content and messages aimed at reducing stigma. Eventually, Women Help Women was able to restore its account—but only after launching a public campaign and receiving national news coverage. 

Unfortunately, Women Help Women’s experience is not unique. Around a quarter of our Stop Censoring Abortion campaign submissions reported that their entire account or page had been disabled or taken down after sharing abortion information—primarily on Meta platforms. This troubling pattern indicates that the censorship crisis goes beyond content removal. Accounts providing crucial reproductive health information are disappearing, often without warning, cutting users off from their communities and followers entirely.

What's worse, Meta appears to be imposing these negative account actions without clearly adhering to its own enforcement policies. Meta’s own Transparency Center stipulates that an account should receive multiple Community Standards violations or warnings before it is restricted or disabled. Yet many affected users told EFF they experienced negative account actions without any warning at all, or after only one alleged violation (many of which were incorrectly flagged, as we’ve explained elsewhere in this series). 

While Meta clearly has the right to remove accounts from its platforms, disabling or banning an account is an extreme measure. It completely silences a user, cutting off communication with their followers and preventing them from sharing any information, let alone abortion information. Because of this severity, Meta should be extremely careful to ensure fairness and accuracy when disabling or removing accounts. Rules governing account removal should be transparent and easy to understand, and Meta must enforce these policies consistently across different users and categories of content. But as our Stop Censoring Abortion results demonstrate, this isn't happening for many accounts sharing abortion information.  

Meta's Maze of Enforcement Policies 

If you navigate to Meta’s Transparency Center, you’ll find a page titled “How Meta enforces its policies.” This page contains a web of intersecting policies on when Meta will restrict accounts, disable accounts, and remove pages and groups. These policies overlap but don’t directly refer to each other, making it trickier for users to piece together how enforcement happens. 

At the heart of Meta's enforcement process is a strike system. Users receive strikes for posting content that violates Meta’s Community Standards. But not all Community Standards violations result in strikes, and whether Meta applies one depends on the “severity of the content” and the “context in which it was shared.” Meta provides little additional guidance on what violations are severe enough to amount to a strike or how context affects this assessment.  

According to Meta's Restricting Accounts policy, for most violations, 1 strike should only result in a warning—not any action against the account. How additional strikes affect an account differs between Facebook and Instagram (but Meta provides no specific guidance for Threads). Facebook relies on a progressive system, where additional strikes lead to increasing restrictions. Enforcement on Instagram is more opaque and leaves more to Meta’s discretion. Meta still counts strikes on Instagram, but it does not follow the same escalating structure of restrictions as it does on Facebook. 

Despite some vagueness in these policies, Meta is quite clear about one thing: On both Facebook and Instagram, an account should only be disabled or removed after “repeated” violations, warnings, or strikes. Meta states this multiple times throughout its enforcement policies. Its Disabling Accounts policy suggests that generally, an account needs to receive at least 5 strikes for Meta to disable or remove it from the platform. The only caveat is for severe violations, such as posting child sexual exploitation content or violating the dangerous individuals and organizations policy. In those extreme cases, Meta may disable an account after just one violation. 
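
To make that enforcement path easier to follow, here is a simplified sketch of the strike logic as described above: one strike yields only a warning, additional strikes bring escalating restrictions, roughly five strikes are needed before an account is disabled, and certain severe violations can disable an account immediately. The threshold and restriction labels are taken from this summary of Meta's public policies, not from Meta's actual systems.

    from dataclasses import dataclass

    SEVERE_POLICIES = {"child_sexual_exploitation", "dangerous_organizations_and_individuals"}
    DISABLE_THRESHOLD = 5  # "at least 5 strikes" per the Disabling Accounts policy described above

    @dataclass
    class Account:
        strikes: int = 0
        disabled: bool = False

    def apply_violation(account: Account, policy: str) -> str:
        """Apply one alleged Community Standards violation using the escalation described above.

        The intermediate restriction label is a placeholder: Meta describes a
        "progressive system" for Facebook without publishing a single schedule here.
        """
        if policy in SEVERE_POLICIES:
            account.disabled = True
            return "account disabled after a single severe violation"
        account.strikes += 1
        if account.strikes == 1:
            return "warning only"
        if account.strikes < DISABLE_THRESHOLD:
            return f"escalating restriction (strike {account.strikes})"
        account.disabled = True
        return "account disabled after repeated strikes"

    acct = Account()
    print(apply_violation(acct, "restricted_goods_and_services"))  # expected: "warning only"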

Meta’s Practices Don’t Match Its Policies 

Our survey results detailed a different reality. Many survey respondents told EFF that Meta disabled or removed their account without warning and without indication that they had received repeated strikes.  It’s important to note that Meta does not have a unique enforcement process for prescription drug or abortion-related content. When EFF asked Meta about this issue, Meta confirmed that "enforcement actions on prescription drugs are subject to Meta's standard enforcement policies.” 

So here are a couple other possible explanations for this disconnect—each of them troubling in their own way:

Meta is Ignoring Its Own Strike System 

If Meta is taking down accounts without warning or after only one alleged Community Standards violation, the company is failing to follow its own strike system. This makes enforcement arbitrary and denies users the opportunity for correction that Meta's system supposedly provides. It’s also especially problematic for abortion advocates, given that Meta has been incorrectly flagging educational abortion content as violating its Community Standards. This means that a single content moderation error could result not only in the post coming down, but the entire account too.  

This may be what happened to Emory University’s RISE Center for Reproductive Health Research (a story we described in more detail earlier in this series). After sharing an educational post about mifepristone, RISE’s Instagram account was suddenly disabled. RISE received no earlier warnings from Meta before its account went dark. When RISE was finally able to get back into its account, it discovered only that this single post had been flagged. Again, according to Meta's own policies, one strike should only result in a warning. But this isn’t what happened here. 

Similarly, the Tamtang Foundation, an abortion advocacy organization based in Thailand, had its Facebook account suddenly disabled earlier this year. Tamtang told EFF it had received a warning on only one flagged post that it had posted 10 months prior to its account being taken down. It received none of the other progressive strike restrictions Meta claims to apply to Facebook accounts. 

Meta is Misclassifying Educational Content as "Extreme Violations" 

If Meta is accurately following its strike policy but still disabling accounts after only one violation, this points to an even more concerning possibility. Meta’s content moderation system may be categorizing educational abortion information as severe enough to warrant immediate disabling, treating university research posts and clinic educational materials as equivalent to child exploitation or terrorist content.  

This would be a fundamental and dangerous mischaracterization of legitimate medical information, and it is, we hope, unlikely. But it’s unfortunately not outside the realm of possibility. We already wrote about a similar disturbing mischaracterization earlier in this series. 

Users Are Unknowingly Receiving Multiple Strikes 

Finally, Meta may be giving users multiple strikes without notifying them. This raises several serious concerns.

First is the lack of transparency. Meta explicitly states in its "Restricting Accounts" policy that it will notify users when it “remove[s] your content or add[s] restrictions to your account, Page or group.” This policy is failing if users are not receiving these notifications and are not made aware there’s an issue with their account. 

It may also mean that Meta’s policies themselves are too vague to provide meaningful guidance to users. This lack of clarity is harmful. If users don’t know what's happening to their accounts, they can’t appeal Meta’s content moderation decisions, adjust their content, or understand Meta's enforcement boundaries moving forward. 

Finally—and most troubling—if Meta is indeed disabling accounts that share abortion information for receiving multiple violations, this points to an even broader censorship crisis. Users may not be aware just how many informational abortion-related posts are being incorrectly flagged and counted as strikes. This is especially concerning given that Meta places a one-year time limit on strikes, meaning the multiple alleged violations could not have accumulated over multiple years.  

The Broader Censorship Crisis 

These account suspensions represent just one facet of Meta's censorship of reproductive health information documented by our Stop Censoring Abortion campaign. When combined with post removals, shadowbanning, and content restrictions, the message is clear: Meta platforms are increasingly unfriendly environments for abortion advocacy and education. 

If Meta wants to practice what it preaches, then it must reform its enforcement policies to provide clear, transparent guidelines on when and how strikes apply, and then consistently and accurately apply those policies. Accounts should not be taken down for only one alleged violation when the policies state otherwise.  

The stakes couldn't be higher. In a post-Roe landscape where access to accurate reproductive health information is more crucial than ever, Meta's enforcement system is silencing the very voices communities need most. 

This is the fifth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion  

Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people. 


Going Viral vs. Going Dark: Why Extremism Trends and Abortion Content Gets Censored

This is the fourth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

One of the goals of our Stop Censoring Abortion campaign was to put names, stories, and numbers to the experiences we’d been hearing about: people and organizations having their abortion-related content – or entire accounts – removed or suppressed on social media. In reviewing survey submissions, we found that multiple users reported being shadowbanned. Shadowbanning (or “deranking”) is widely reported by content creators across various social media platforms, and it’s a phenomenon that those who create content about abortion and sexual and reproductive health know all too well.

Shadowbanning is the often silent suppression of certain types of content or creators in your social media feeds. It’s not something that a U.S.-based creator is notified about, but rather something they simply find out when their posts stop getting the same level of engagement that they’re used to, or when people are unable to easily find their account using the platform’s search function. Essentially, it is when a platform or its algorithm decides that other users should see less of a creator or specific topic. Many platforms deny that shadowbanning exists; they will often blame reduced reach of posts on ‘bugs’ in the algorithm. At the same time, companies like Meta have admitted that content is ranked, but much about how this ranking system works remains unknown. Meta says that there are five content categories that, while allowed on its platforms, “may not be eligible for recommendation.” Content discussing abortion pills may fall under the umbrella of “Content that promotes the use of certain regulated products,” but posts that simply affirm abortion as a valid reproductive decision, or that feature storytellers sharing their experiences, don’t match any of the criteria that would make them ineligible for recommendation by Meta.

Whether a creator relies on a platform for income or uses it to educate the public, shadowbanning can be devastating for the growth of an account. And this practice often seems to disproportionately affect people who are talking about ‘taboo’ topics like sex, abortion, and LGBTQ+ identities, such as Kim Adamski, a sexual health educator who shared her story with our Stop Censoring Abortion project. As you can see in the images below, Kim’s Instagram account does not show up as a suggestion when being searched, and can only be found after typing in the full username.


Earlier this year, the Center for Intimacy Justice shared their report, "The Digital Gag: Suppression of Sexual and Reproductive Health on Meta, TikTok, Amazon, and Google", which found that of the 159 nonprofits, content creators, sex educators, and businesses surveyed, 63% had content removed on Meta platforms and 55% had content removed on TikTok. This suppression is happening even as platforms continue to allow and elevate videos of violence and gore, as well as extremist and hateful content. This pattern is troubling and is only becoming more prevalent as people turn to social media to find the information they need to make decisions about their health.

Reproductive rights and sex education have been under attack across the U.S. for decades. Since the Dobbs v. Jackson decision in 2022, 20 states have banned or limited access to abortion. Meanwhile, 16 states don’t require sex education in public schools to be medically accurate, 19 states have laws that stigmatize LGBTQ+ identities in their sex education curricula, and 17 states specifically stigmatize abortion in their sex education curricula.


Online platforms are critical lifelines for people seeking possibly life-saving information about their sexual and reproductive health. We know that when people are unable to find or access the information they need within their communities, they will turn to the internet and social media. This is especially important for abortion-seekers and trans youth living in states where healthcare is being criminalized.

In a world that is constantly finding ways to legislate away bodily autonomy and hide queer identities, social media platforms have an opportunity to stand as safe havens for access to community and knowledge. Limiting access to this information by suppressing the people and organizations who are providing it is an attack on free expression and a profound threat to freedom of information—principles that these platforms claim to uphold. Now more than ever, we must continue to push back against censorship of sexual and reproductive health information so that the internet can still be a place where all voices are heard and where all can learn.

This is the fourth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion


Companies Must Provide Accurate and Transparent Information to Users When Posts are Removed

This is the third installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

Imagine sharing information about reproductive health care on social media and receiving a message that your content has been removed for violating a policy intended to curb online extremism. That’s exactly what happened to one person using Instagram who shared her story with our Stop Censoring Abortion project.

Meta’s rules for “Dangerous Organizations and Individuals” (DOI) were supposed to be narrow: a way to prevent the platform from being used by terrorist groups, organized crime, and those engaged in violent or criminal activity. But over the years, we’ve seen these rules applied in far broader—and more troubling—ways, with little transparency and significant impact on marginalized voices.

EFF has long warned that the DOI policy is opaque, inconsistently enforced, and prone to overreach. The policy has been critiqued by others for its opacity and propensity to disproportionately censor marginalized groups.

a screenshot showing the user's post being flagged under Meta's DOI policy

Samantha Shoemaker's post about Plan C was flagged under Meta's policy on dangerous organizations and individuals

Meta has since added examples and clarifications in its Transparency Center to this and other policies, but their implementation still leaves users in the dark about what’s allowed and what isn’t.

The case we received illustrates just how harmful this lack of clarity can be. Samantha Shoemaker, an individual sharing information about abortion care, posted straightforward facts about accessing abortion pills. Her posts included:

  • A video linking to Plan C’s website, which lists organizations that provide abortion pills in different states.

  • A reshared image from Plan C’s own Instagram account encouraging people to learn about advance provision of abortion pills.

  • A short clip of women talking about their experiences taking abortion pills.

Information Provided to Users Must Be Accurate

Instead of allowing her to facilitate informed discussion, Instagram flagged some of her posts under its “Prescription Drugs” policy, while others were removed under the DOI policy—the same set of rules meant to stop violent extremism from being shared.

We recognize that moderation systems—both human and automated—will make mistakes. But when Meta equates medically accurate, harm-reducing information about abortion with “dangerous organizations,” it underscores a deeper problem: the blunt tools of content moderation disproportionately silence speech that is lawful, important, and often life-saving.

At a time when access to abortion information is already under political attack in the United States and around the world, platforms must be especially careful not to compound the harm. This incident shows how overly broad rules and opaque enforcement can erase valuable speech and disempower users who most need access to knowledge.

And when content does violate the rules, it’s important that users are provided with accurate information as to why. An individual sharing information about health care will undoubtedly be confused or upset by being told that they have violated a policy meant to curb violent extremism. Moderating content responsibly means offering users as much transparency and clarity as possible. As outlined in the Santa Clara Principles on Transparency and Accountability in Content Moderation, users should be able to readily understand:

  • What types of content are prohibited by the company and will be removed, with detailed guidance and examples of permissible and impermissible content;
  • What types of content the company will take action against other than removal, such as algorithmic downranking, with detailed guidance and examples on each type of content and action; and
  • The circumstances under which the company will suspend a user’s account, whether permanently or temporarily.

What You Can Do if Your Content is Removed

If you find your content removed under Meta’s policies, you do have options:

  • Appeal the decision: Every takedown notice should give you the option to appeal within the app. Appeals are sometimes reviewed by a human moderator rather than an automated system.
  • Request Oversight Board review: In certain cases, you can escalate to Meta’s independent Oversight Board, which has the power to overturn takedowns and set policy precedents.
  • Document your case: Save screenshots of takedown notices, appeals, and your original post. This documentation is essential if you want to report the issue to advocacy groups or in future proceedings.
  • Share your story: Projects like Stop Censoring Abortion collect cases of unjust takedowns to build pressure for change. Speaking out, whether to EFF and other advocacy groups or to the media, helps illustrate how policies harm real people.

Abortion is health care. Sharing information about it is not dangerous—it’s necessary. Meta should allow users to share vital information about reproductive care. The company must also ensure that users are provided with clear information about how their policies are being applied and how to appeal seemingly wrongful decisions.

This is the third post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion   


When Knowing Someone at Meta Is the Only Way to Break Out of “Content Jail”

This is the second installment in a ten-part blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.

During our Stop Censoring Abortion campaign, we set out to collect and spotlight the growing number of stories from people and organizations that have had abortion-related content removed, suppressed, or flagged by dominant social media platforms. Our survey submissions have revealed some alarming trends, including this one: if you don’t have a personal or second-degree connection at Meta, your chances of restoring your content or account drop significantly.

Through the survey, we heard from activists, clinics, and researchers whose accounts were suspended or permanently removed for allegedly violating Meta’s policies on promoting or selling “restricted goods,” even when their posts were purely educational or informational. What the submissions also showed is a pattern of overenforcement, lack of transparency, and arbitrary moderation decisions that have specifically affected reproductive health and reproductive justice advocates. 

When accounts are taken down, appeals can take days, weeks, or even months (if they're resolved at all, or if users are even given the option to appeal). For organizations and providers, this means losing access to vital communication tools and being cut off from the communities they serve. This is highly damaging since so much of that interaction happens on Meta’s platforms. Yet we saw a disturbing pattern emerge in our survey: on several occasions, accounts were swiftly restored once someone with a connection to Meta intervened.

The Case Studies: An Abortion Clinic

The Red River Women's Clinic is an abortion clinic in Moorhead, MN. It was originally located in Fargo, North Dakota, and for many years was the only abortion clinic in North Dakota. In early January, the clinic’s director heard from a patient who thought the clinic offered only procedural/surgical abortions and not medication abortion. To clarify for other patients, they posted on the clinic’s page that they offered both procedural and medication abortions—attaching an image of a box of mifepristone. When they tried to boost the post, the ad was flagged and their account was suspended.

They appealed the decision and initially got the ad approved, yet the page was suspended again shortly after. This time, multiple appeals and direct emails went unanswered until they reached out to a digital rights organization that was able to connect them with staff at Meta who stepped in. Only then was their page restored, with Meta noting that their post did not violate the policies but warning that future violations could lead to permanent removal.

While this may have been a glitch in Meta’s systems or a misapplication of policy, the suspension of the clinic’s Facebook account was detrimental for them. “We were unable to update our followers about dates/times we were closed, we were unable to share important information and news about abortion that would have kept our followers up to date, there was a legislative session happening and we were unable to share events and timely asks for reaching out to legislators about issues,” shared Tammi Kromenaker, Director of Red River Women's Clinic. The clinic was also prevented from starting an Instagram page due to the suspension. “Facebook has a certain audience and Instagram has another audience,” said Kromenaker. “We are trying to cater to all of our supporters so the loss of FB and the inability to access and start an Instagram account were really troubling to us.”

The Case Studies: RISE at Emory University

RISE, a reproductive health research center at Emory University, launched an Instagram account to share community-centered research and combat misinformation related to reproductive health. In January of this year, they posted educational content about mifepristone on their Instagram. “Let's talk about Mifepristone + its uses + the importance of access”, read the post. Two months later, their account was suddenly suspended; Meta had flagged the account under its policy against selling illegal drugs. Their appeal was denied, which led to the account being permanently deleted.

A screenshot of an instagram post from @emory.rise that reads "let's talk about mifepristone" in bold black font "+ its uses + the importance of access" in blue

Screenshot submitted by RISE to EFF

“As a team, this was a hit to our morale” shared Sara Redd, Director of Research Translation at RISE. “We pour countless hours of person-power, creativity, and passion into creating the content we have on our page, and having it vanish virtually overnight took a toll on our team.” For many organizational users like RISE, their social media accounts are a repository for resources and metrics that may not be stored elsewhere. “We spent a significant amount of already-constrained team capacity attempting to recover all of the content we’d created for Instagram that was potentially going to be permanently lost. [...] We also spent a significant amount of time and energy trying to understand what options we might have available from Meta to appeal our case and/or recover our account; their support options are not easily accessible, and the time it took to navigate this issue distracted from our existing work.”  

Meta restored the account only after RISE was able to connect with someone there. Once RISE logged back in, they confirmed that the flagged post was the one about mifepristone. The post never sold or directed people to where they could buy pills; it simply provided accurate information about the use and efficacy of the drug.

This Shouldn’t Be How Content Moderation Works

Meta spokespersons have admitted to instances of “overenforcement” in various press statements, noting that content is sometimes incorrectly removed or blurred even when it doesn’t actually violate policy. Meta has insisted to the public that they care about free speech, as a spokesperson mentioned to The New York Times: “We want our platforms to be a place where people can access reliable information about health services, advertisers can promote health services and everyone can discuss and debate public policies in this space [...] That’s why we allow posts and ads about, discussing and debating abortion.” In fact, their platform policies directly mention this:

Note that advertisers don’t need authorization to run ads that only:

  • Educate, advocate or give public service announcements related to prescription drugs

Additionally:

Note: Debating or advocating for the legality or discussing scientific or medical merits of prescription drugs is allowed. This includes news and public service announcements. 

Meta also has policies specific to “Health and Wellness,” where they state: 

When targeting people 18 years or older, advertisers can run ads that:

  • Promote sexual and reproductive health and wellness products or services, as long as the focus is on health and the medical efficacy of the product or the service and not on the sexual pleasure or enhancement. And these ads must target people 18 years or older. This includes ads for: [...]
  • Family planning methods, such as:
    • Family planning clinics
    • In Vitro Fertilization (IVF) or any other artificial insemination procedures
    • Fertility awareness
    • Abortion medical consultation and related services

But these public commitments don’t always match users’ experiences. 

Take the widely covered case of Aid Access, a group that provides medication abortion by mail. This year, several of the group's posts were blurred or removed on Instagram, including one with tips for feeling safe and supported at home after taking abortion medication. Only after multiple national media outlets contacted Meta for comment on the story were the posts and account restored.

So the question becomes: If Meta admits its enforcement isn’t perfect, why does it still take knowing someone, or having the media involved, to get a fair review? When companies like Meta claim to uphold commitments to free speech, those commitments should materialize in clear policies that are enforced equally, not only when an issue is escalated by leveraging relationships with Meta personnel.

“Facebook Jail” Reform

There is no question that the enforcement of these content moderation policies on Meta platforms, and the length of time people are spending in “content jail” or “Facebook/Instagram jail,” has created a chilling effect.

“I think that I am more cautious and aware that the 6.1K followers we have built up over time could be taken away at any time based on the whims of Meta,” Tammi from Red River Women’s Clinic told us. 

RISE sees it in a slightly different light, sharing that “[w]hile this experience has not affected our fundamental values and commitment to sharing our work and rigorous science, it has highlighted for us that no information posted on a third-party platform is entirely one’s own, and thus can be dismantled at any moment.”

At the end of the day, clinics are left afraid to post basic information, patients are left confused or misinformed, and researchers lose access to their audiences. But unless your issue catches the attention of a journalist or you know someone at Meta, you might never regain access to your account.

These case studies highlight the urgent need for transparent, equitable, and timely enforcement that is not dependent on insider connections, as well as accountability from platforms that claim to support open dialogue and free speech. Meta’s admitted overenforcement should, at minimum, be coupled with efficient and well-staffed review processes and policies that are transparent and easily understandable. 

It’s time for Meta and other social media platforms to implement the reforms they claim to support, and for them to prove that protecting access to vital health information doesn’t hinge on who you know.

This is the second post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion   


Our Stop Censoring Abortion Campaign Uncovers a Social Media Censorship Crisis

This is the first installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here. 

We’ve been hearing that social media platforms are censoring abortion-related content, even when no law requires them to do so. Now, we’ve got the receipts. 

For months, EFF has been investigating stories from users whose abortion-related content has been taken down or otherwise suppressed by major social media platforms. In collaboration with our allies—including Plan C, Women on Web, Reproaction, and Women First Digital—we launched the #StopCensoringAbortion campaign to collect and amplify these stories.  

Submissions came from a variety of users, including personal accounts, influencers, healthcare clinics, research organizations, and advocacy groups from across the country and abroad—a spectrum that underscores the wide reach of this censorship. Since the start of the year, we’ve seen nearly 100 examples of abortion-related content taken down by social media platforms. 

We analyzed these takedowns, deletions, and bans, comparing the content to what platform policies allow—particularly those of Meta—and found that almost none of the submissions we received violated any of the platforms’ stated policies. Most of the censored posts simply provided factual, educational information. This Threads post is a perfect example: 

Screenshot of removed post submitted by Lauren Kahre to EFF

In this post, health policy strategist Lauren Kahre discussed abortion pills’ availability via mail. She provided factual information about two FDA-approved medications (mifepristone and misoprostol), including facts like shelf life and how to store the pills safely.

Lauren’s post doesn’t violate any of Meta’s policies and shouldn’t have been removed. But don’t just take our word for it: Meta has publicly insisted that posts like these should not be censored. In a February 2024 letter to Amnesty International, Meta Human Rights Policy Director Miranda Sissons wrote: “Organic content (i.e., non paid content) educating users about medication abortion is allowed and does not violate our Community Standards. Additionally, providing guidance on legal access to pharmaceuticals is allowed.” 

Still, shortly after Lauren shared this post, Meta took it down. Perhaps even more perplexing was their explanation for doing so. According to Meta, the post was removed because “[they] don’t allow people to buy, sell, or exchange drugs that require a prescription from a doctor or a pharmacist.” 

Screenshot of takedown notice submitted by Lauren Kahre to EFF

In the submissions we received, this was the most common reason Meta gave for removing abortion-related content. The company frequently claimed that posts violated policies on Restricted Goods and Services, which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.”  

Yet in Lauren’s case and others, the posts very clearly did no such thing. And as Meta itself has explained: “Providing guidance on how to legally access pharmaceuticals is permitted as it is not considered an offer to buy, sell or trade these drugs.” 

In fact, Meta’s policies on Restricted Goods & Services further state: “We allow discussions about the sale of these goods in stores or by online retailers, advocating for changes to regulations of goods and services covered in this policy, and advocating for or concerning the use of pharmaceutical drugs in the context of medical treatment, including discussion of physical or mental side effects.” Also, “Debating or advocating for the legality or discussing scientific or medical merits of prescription drugs is allowed. This includes news and public service announcements.” 

Over and over again, the policies say one thing, but the actual enforcement says another. 

We spoke with multiple Meta representatives to share these findings. We asked hard questions about their policies and the gap between what those policies say and how they’re being applied. Unfortunately, we were mostly left with the same concerns, but we’re continuing to push them to do better.

In the coming weeks, we will share a series of blogs further examining trends we found, including stories of unequal enforcement, where individuals and organizations needed to rely on internal connections at Meta to get wrongfully censored posts restored; examples of account suspensions without sufficient warnings; an exploration of Meta’s ad policies; practical tips for users to avoid being censored; and concrete steps platforms should take to reform their abortion content moderation practices. For a preview, we’ve already shared some of our findings with Barbara Ortutay at The Associated Press, whose report on some of these takedowns was published today.

We hope this series highlighting examples of abortion content censorship will help the public and the platforms understand the breadth of this problem, who is affected, and with what consequences. These stories collectively underscore the urgent need for platforms to review and consistently enforce their policies in a fair and transparent manner.  

With reproductive rights under attack both in the U.S. and abroad, sharing accurate information about abortion online has never been more critical. Together, we can hold platforms like Meta accountable, demand transparency in moderation practices, and ultimately stop the censorship of this essential, sometimes life-saving information. 

This is the first post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion    


Fake Clinics Quietly Edit Their Websites After Being Called Out on HIPAA Claims

In a promising sign that public pressure works, several crisis pregnancy centers (CPCs, also known as “fake clinics”) have quietly scrubbed misleading language about privacy protections from their websites. 

Earlier this year, EFF sent complaints to attorneys general in eight states (FL, TX, AR, MO, TN, OK, NE, and NC), asking them to investigate these centers for misleading the public with false claims about their privacy practices—specifically, falsely stating or implying that they are bound by the Health Insurance Portability and Accountability Act (HIPAA). These claims are especially deceptive because many of these centers are not licensed medical clinics or do not have any medical providers on staff, and thus are not covered by HIPAA at all.

Now, after an internal follow-up investigation, we’ve found that our efforts are already bearing fruit: Of the 21 CPCs we cited as exhibits in our complaints, six have completely removed HIPAA references from their websites, and one has made partial changes (removed one of two misleading claims). Notably, every center we flagged in our letters to Texas AG Ken Paxton and Arkansas AG Tim Griffin has updated its website—a clear sign that clinics in these states are responding to scrutiny.

While 14 remain unchanged, this is a promising development. These centers are clearly paying attention—and changing their messaging. We haven’t yet received substantive responses from the state attorneys general beyond formal acknowledgements of our complaints, but these early results confirm what we’ve long believed: transparency and public pressure work.

These changes (often quiet edits to privacy policies on their websites or deleting blog posts) signal that the CPC network is trying to clean up their public-facing language in the wake of scrutiny. But removing HIPAA references from a website doesn’t mean the underlying privacy issues have been fixed. Most CPCs are still not subject to HIPAA, because they are not licensed healthcare providers. They continue to collect sensitive information without clearly disclosing how it’s stored, used, or shared. And in the absence of strong federal privacy laws, there is little recourse for people whose data is misused. 

These clinics have misled patients who are often navigating complex and emotional decisions about their health, misrepresented themselves as bound by federal privacy law, and falsely referred people to the U.S. Department of Health and Human Services for redress—implying legal oversight and accountability. They made patients believe their sensitive data was protected, when in many cases it was shared with affiliated networks, including churches or political organizations, or even put on the internet for anyone to see.

That’s why we continue to monitor these centers—and call on state attorneys general to do the same. 
