Received today — 14 February 2026

Given the toxicity of social media, a moral question now faces all of us: is it still ethical to use it? | Frances Ryan

14 February 2026 at 01:00

With so many platforms rife with racism, misogyny and far-right rhetoric, there must be a point where decent people walk away

In a week during which Keir Starmer has been under pressure to resign, cabinet ministers took to X to show their support. “We’ve all been made to tweet,” one Labour figure told a political journalist. The irony is hard to escape: as the prime minister is embroiled in the scandal of Peter Mandelson’s relationship with Jeffrey Epstein, and now his former aide’s links to a sex offender, MPs are defending him on a platform that has in the past month allowed users to create sexualised images of women and girls.

This says something about the unprecedented way in which X has been tied to modern politics since it was still known as Twitter, as well as how widespread the culture of indifference is to the violation of female bodies, both online and off. But it also points to a growing dilemma facing not just politicians, but all of us: is it possible to post ethically on social media any more? And when is it time to log off?

Frances Ryan is a Guardian columnist

© Photograph: Peter Dazeley/Getty Images

Received before yesterday

Dr. Oz Says Drinking Is a ‘Social Lubricant.’ Some Experts Worry About That.

10 February 2026 at 17:56
“Most of the harm that comes from alcohol,” said one researcher, is “due mostly or mainly to drinking with their buddies.”

© Edu Bayer for The New York Times

Most research examining the effects of alcohol in a controlled laboratory setting has ignored the social context in which most drinking occurs.

War Came to Ukraine and Its Dogs Are Not the Same

8 February 2026 at 05:01
Researchers discovered surprising changes to former pets along the front line of combat with Russia.

© Tetiana Dzhafarova/Agence France-Presse — Getty Images

A dog walked past damaged houses in the city of Svyatohirsk, Donetsk, last summer.

To reuse or not reuse—the eternal debate of New Glenn's second stage reignites

6 February 2026 at 14:31

Engineers at Blue Origin have been grappling with a seemingly eternal debate that involves the New Glenn rocket and the economics of flying it.

The debate goes back at least 15 years, to the early discussions around the design of the heavy lift rocket. The first stage, of course, would be fully reusable. But what about the upper stage of New Glenn, powered by two large BE-3U engines?

Around the same time, in the early 2010s, SpaceX was also trading the economics of reusing the second stage of its Falcon 9 rocket. Eventually SpaceX founder Elon Musk abandoned his goal of a fully reusable Falcon 9, choosing instead to recover payload fairings and push down manufacturing costs of the upper stage as much as possible. This strategy worked, as SpaceX has lowered its internal launch costs of a Falcon 9, even with a new second stage, to about $15 million. The company is now focused on making the larger Starship rocket fully reusable.

© Blue Origin

Kingston Police arrest female unable to care for herself or child

6 February 2026 at 12:51
A 26-year-old woman from Windsor has been arrested and her child has been taken to Family and Children’s Services of Frontenac, Lennox and Addington following an interaction with Kingston Police. Read More

For Men, How Much Alcohol Is Too Much?

16 January 2026 at 05:01
Federal officials working on the new dietary guidelines had considered limiting men to one drink daily. The final advice was only that everyone should drink less.

© Robert Wright for The New York Times

“There are a lot of reasons people drink alcohol,” said one epidemiologist who led an advisory panel on alcohol. “What we’re saying is health shouldn’t be one of them.”

Artificial Intelligence, Copyright, and the Fight for User Rights: 2025 in Review

25 December 2025 at 15:07

A tidal wave of copyright lawsuits against AI developers threatens beneficial uses of AI, like creative expression, legal research, and scientific advancement. How courts decide these cases will profoundly shape the future of this technology, including its capabilities, its costs, and whether its evolution will be shaped by the democratizing forces of the open market or the whims of an oligopoly. As these cases finished their trials and moved to appeals courts in 2025, EFF intervened to defend fair use, promote competition, and protect everyone’s rights to build and benefit from this technology.

At the same time, rightsholders stepped up their efforts to control fair uses through everything from state AI laws to technical standards that influence how the web functions. In 2025, EFF fought policies that threaten the open web in the California State Legislature, the Internet Engineering Task Force, and beyond.

Fair Use Still Protects Learning—Even by Machines

Copyright lawsuits against AI developers often follow a similar pattern: plaintiffs argue that using their works to train the models was infringement, and developers counter that the training is fair use. While legal theories vary, the core issue in many of these cases is whether using copyrighted works to train AI is a fair use.

We think that it is. Courts have long recognized that copying works for analysis, indexing, or search is a classic fair use. That principle doesn’t change because a statistical model is doing the reading. AI training is a legitimate, transformative fair use, not a substitute for the original works.

More importantly, expanding copyright would do more harm than good: while creators have legitimate concerns about AI, expanding copyright won’t protect jobs from automation. But overbroad licensing requirements risk entrenching Big Tech’s dominance, shutting out small developers, and undermining fair use protections for researchers and artists. Copyright is a tool that gives the most powerful companies even more control—not a check on Big Tech. And attacking the models and their outputs by attacking training—i.e. “learning” from existing works—is a dangerous move. It risks a core principle of freedom of expression: that training and learning—by anyone—should not be endangered by restrictive rightsholders.

In most of the AI cases, courts have yet to consider—let alone decide—whether fair use applies, but in 2025, things began to speed up.

Some cases have already reached courts of appeal. We advocated for fair use rights and sensible limits on copyright in amicus briefs filed in Doe v. GitHub, Thomson Reuters v. Ross Intelligence, and Bartz v. Anthropic, three early AI copyright appeals that could shape copyright law and influence dozens of other cases. We also filed an amicus brief in Kadrey v. Meta, which produced one of the first decisions on the merits of the fair use defense in an AI copyright case.

How the courts decide the fair use questions in these cases could profoundly shape the future of AI—and whether legacy gatekeepers will have the power to control it. As these cases move forward, EFF will continue to defend your fair use rights.

Protecting the Open Web in the IETF

Rightsholders also tried to make an end-run around fair use by changing the technical standards that shape much of the internet. The IETF, an Internet standards body, has been developing standards that pose a major threat to the open web. These proposals would let websites express “preference signals” against certain uses of scraped data—effectively giving them veto power over fair uses like AI training and web search.

Overly restrictive preference signaling threatens a wide range of important uses—from accessibility tools for people with disabilities to research efforts aimed at holding governments accountable. Worse, the IETF is dominated by publishers and tech companies seeking to embed their business models into the infrastructure of the internet. These companies aren’t looking out for the billions of internet users who rely on the open web.

That’s where EFF comes in. We advocated for users’ interests in the IETF, and helped defeat the most dangerous aspects of these proposals—at least for now.

Looking Ahead

The AI copyright battles of 2025 were never just about compensation—they were about control. EFF will continue working in courts, legislatures, and standards bodies to protect creativity and innovation from copyright maximalists.

Online Gaming’s Final Boss: The Copyright Bully

19 December 2025 at 13:14

Since the earliest days of computer games, people have tinkered with the software to customize their own experiences or share their vision with others. From the dad who changed a game’s male protagonist to a girl so his daughter could see herself in it, to the developers who got their start in modding, games have been a medium where you don’t just consume a product; you participate and interact with culture.

For decades, that participatory experience was a key part of one of the longest-running video games still in operation: EverQuest. Players had the official client, acquired lawfully from EverQuest’s developers, and modders figured out how to enable those clients to communicate with their own servers and then modify their play experience – creating new communities along the way.

EverQuest’s copyright owners implicitly blessed all this. But the current owners, a private equity firm called Daybreak, want to end that independent creativity. They are using copyright claims to threaten modders who wanted to customize the EverQuest experience to suit a different playstyle, running their own servers where things worked the way they wanted.

One project in particular is in Daybreak’s crosshairs: “The Hero’s Journey” (THJ). Daybreak claims THJ has infringed its copyrights in EverQuest visuals and characters, cutting into its bottom line.

Ordinarily, when a company wants to remedy some actual harm, its lawyers will start with a cease-and-desist letter and potentially pursue a settlement. But if the goal is intimidation, a rightsholder is free to go directly to federal court and file a complaint. That’s exactly what Daybreak did, using that shock-and-awe approach to cow not only The Hero’s Journey team, but unrelated modders as well.

Daybreak’s complaint seems to have dazzled the judge in the case by presenting side-by-side images of dragons and characters that look identical in the base game and when using the mod, without explaining that these images are the ones provided by EverQuest’s official client, which players have lawfully downloaded from the official source. The judge wound up short-cutting the copyright analysis and issuing a ruling that has proven devastating to the thousands of players who are part of EverQuest modding communities.

Daybreak and the developers of The Hero’s Journey are now in private arbitration, and Daybreak has wasted no time in sending that initial ruling to other modders. The order doesn’t bind anyone who’s unaffiliated with The Hero’s Journey, but it’s understandable that modders who are in it for fun and community would cave to the implied threat that they could be next.

As a result, dozens of fan servers have stopped operating. Daybreak has also persuaded the maintainers of the shared server emulation software that most fan servers rely upon, EQEmulator, to adopt terms of service that essentially ban any but the most negligible modding. The terms also provide that “your operation of an EQEmulator server is subject to Daybreak’s permission, which it may revoke for any reason or no reason at any time, without any liability to you or any other person or entity. You agree to fully and immediately comply with any demand from Daybreak to modify, restrict, or shut down any EQEmulator server.” 

This is sadly not even an uncommon story in fanspaces—from the dustup over changes to the Dungeons and Dragons open gaming license to the “guidelines” issued by CBS for Star Trek fan films, we see new generations of owners deciding to alienate their most avid fans in exchange for more control over their new property. It often seems counterintuitive—fans are creating new experiences, for free, that encourage others to get interested in the original work.

Daybreak can claim a shameful victory: it has imposed unilateral terms on the modding community that are far more restrictive than what fair use and other user rights would allow. In the process, it is alienating the very people it should want to cultivate as customers: hardcore EverQuest fans. If it wants fans to continue to invest in making its games appeal to broader audiences and serve as testbeds for game development and sources of goodwill, it needs to give the game’s fans room to breathe and to play.

If you’ve been a target of Daybreak’s legal bullying, we’d love to hear from you; email us at info@eff.org.

Fair Use is a Right. Ignoring It Has Consequences.

18 December 2025 at 15:54

Fair use is not just an excuse to copy—it’s a pillar of online speech protection, and disregarding it in order to lash out at a critic should have serious consequences. That’s what we told a federal court in Channel 781 News v. Waltham Community Access Corporation, our case fighting copyright abuse on behalf of citizen journalists.

Waltham Community Access Corporation (WCAC), a public access cable station in Waltham, Massachusetts, records city council meetings on video. Channel 781 News (Channel 781), a group of volunteers who report on the city council, curates clips from those recordings for its YouTube channel, along with original programming, to spark debate on issues like housing and transportation. WCAC sent a series of takedown notices under the Digital Millennium Copyright Act (DMCA), accusing Channel 781 of copyright infringement. That led to YouTube deactivating Channel 781’s channel just days before a critical municipal election. Represented by EFF and the law firm Brown Rudnick LLP, Channel 781 sued WCAC for misrepresentations in its takedown notices under an important but underutilized provision of the DMCA.

The DMCA gives copyright holders a powerful tool to take down other people’s content from platforms like YouTube. The “notice and takedown” process requires only an email, or filling out a web form, in order to accuse another user of copyright infringement and have their content taken down. And multiple notices typically lead to the target’s account being suspended, because doing so helps the platform avoid liability. There’s no court or referee involved, so anyone can bring an accusation and get a nearly instantaneous takedown.

Of course, that power invites abuse. Because filing a DMCA infringement notice is so easy, there’s a temptation to use it at the drop of a hat to take down speech that someone doesn’t like. To prevent that, before sending a takedown notice, a copyright holder has to consider whether the use they’re complaining about is a fair use. Specifically, the copyright holder needs to form a “good faith belief” that the use is not “authorized by the law,” such as through fair use.

WCAC didn’t do that. They didn’t like Channel 781 posting short clips from city council meetings recorded by WCAC as a way of educating Waltham voters about their elected officials. So WCAC fired off DMCA takedown notices at many of Channel 781’s clips that were posted on YouTube.

WCAC claims they considered fair use, because a staff member watched a video about it and discussed it internally. But WCAC ignored three of the four fair use factors. WCAC ignored that their videos had no creativity, being nothing more than records of public meetings. They ignored that the clips were short, generally including one or two officials’ comments on a single issue. They ignored that the clips caused WCAC no monetary or other harm, beyond wounded pride. And they ignored facts they already knew, and that are central to the remaining fair use factor: by excerpting and posting the clips with new titles, Channel 781 was putting its own “spin” on the material - in other words, transforming it. All of these facts support fair use.

Instead, WCAC focused only on the fact that the clips they targeted were not altered further or put into a larger program. Looking at just that one aspect of fair use isn’t enough, and changing the fair use inquiry to reach the result they wanted is hardly the way to reach a “good faith belief.”

That’s why we’re asking the court to rule that WCAC’s conduct violated the law and that they should pay damages. Copyright holders need to use the powerful DMCA takedown process with care, and when they don’t, there needs to be consequences.

EU Reaches Agreement on Child Sexual Abuse Detection Law After Three Years of Contentious Debate

27 November 2025 at 13:47

A lengthy standoff over privacy rights versus child protection ended Wednesday when EU member states finally agreed on a negotiating mandate for the Child Sexual Abuse Regulation, a controversial law that would require online platforms to detect, report, and remove child sexual abuse material. Critics warn the measures could enable mass surveillance of private communications.

The Council agreement, reached despite opposition from the Czech Republic, the Netherlands, and Poland, clears the way for trilogue negotiations with the European Parliament to begin in 2026 on legislation that would permanently extend voluntary scanning provisions and establish a new EU Centre on Child Sexual Abuse.

The Council’s position introduces three risk categories for online services, based on objective criteria such as service type. Authorities would be able to oblige providers classified as high-risk to contribute to developing technologies that mitigate the risks their services pose. The framework shifts responsibility onto digital companies to proactively address risks on their platforms.

Permanent Extension of Voluntary Scanning

One significant provision permanently extends voluntary scanning, a temporary measure first introduced in 2021 that allows companies to voluntarily scan for child sexual abuse material without violating EU privacy laws. That exemption was set to expire in April 2026 under current e-Privacy Directive provisions.

At present, providers of messaging services may voluntarily check content shared on their platforms for online child sexual abuse material, then report and remove it. According to the Council position, this exemption will continue to apply indefinitely under the new law.

Danish Justice Minister Peter Hummelgaard welcomed the Council's agreement, stating that the spread of child sexual abuse material is "completely unacceptable." "Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse," Hummelgaard said.

New EU Centre on Child Sexual Abuse

The legislation provides for establishment of a new EU agency, the EU Centre on Child Sexual Abuse, to support implementation of the regulation. The Centre will act as a hub for child sexual abuse material detection, reporting, and database management, receiving reports from providers, assessing risk levels across platforms, and maintaining a database of indicators.

The EU Centre will assess and process information supplied by online providers about child sexual abuse material identified on services, creating, maintaining and operating a database for reports submitted by providers. The Centre will share information from companies with Europol and national law enforcement bodies, supporting national authorities in assessing the risk that online services could be used to spread abuse material.

Online companies must provide assistance for victims who would like child sexual abuse material depicting them removed or for access to such material disabled. Victims can ask for support from the EU Centre, which will check whether companies involved have removed or disabled access to items victims want taken down.

Privacy Concerns and Opposition

The breakthrough comes after months of stalled negotiations and a postponed October vote when Germany joined a blocking minority opposing what critics commonly call “chat control.” Berlin argued the proposal risked “unwarranted monitoring of chats,” comparing it to opening other people’s letters.

Critics from Big Tech companies and data privacy NGOs warn the measures could pave the way for mass surveillance, as private messages would be scanned by authorities to detect illegal images. The Computer and Communications Industry Association stated that EU member states made clear the regulation can only move forward if new rules strike a true balance protecting minors while maintaining confidentiality of communications, including end-to-end encryption.

Former Pirate MEP Patrick Breyer, who has been advocating against the file, characterized the Council endorsement as "a Trojan Horse" that legitimizes warrantless, error-prone mass surveillance of millions of Europeans by US corporations through cementing voluntary mass scanning.

The European Parliament’s study heavily critiqued the Commission’s proposal, concluding that no current technological solution can detect child sexual abuse material without high error rates affecting all messages, files, and data on a platform. The study also concluded the proposal would undermine end-to-end encryption and the security of digital communications.

Scope of the Crisis

Statistics underscore the urgency. Last year, 20.5 million reports covering 63 million files of abuse were submitted to the National Center for Missing and Exploited Children’s CyberTipline, and online grooming has increased 300 percent since negotiations began. Every half second, an image of a child being sexually abused is reported online.
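As a quick aside (our arithmetic, not the article’s), the “every half second” figure is roughly consistent with the annual file count cited above:

```python
# Back-of-the-envelope check (ours, not the article's): 63 million files in a
# year works out to roughly one reported file every half second.
seconds_per_year = 365 * 24 * 3600        # about 31.5 million seconds
files_reported = 63_000_000
print(seconds_per_year / files_reported)  # ~0.5 seconds between reported files
```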

Sixty-two percent of abuse content flagged by the Internet Watch Foundation in 2024 was traced to EU servers, with at least one in five children in Europe a victim of sexual abuse.

The Council position allows trilogue negotiations with the European Parliament and Commission to start in 2026. Those negotiations need to conclude before the already postponed expiration of the current e-Privacy regulation that allows exceptions under which companies can conduct voluntary scanning. The European Parliament reached its negotiating position in November 2023.

Google Sues to Disrupt Chinese SMS Phishing Triad

13 November 2025 at 09:47

Google is suing more than two dozen unnamed individuals allegedly involved in peddling a popular China-based mobile phishing service that helps scammers impersonate hundreds of trusted brands, blast out text message lures, and convert phished payment card data into mobile wallets from Apple and Google.

In a lawsuit filed in the Southern District of New York on November 12, Google sued to unmask and disrupt 25 “John Doe” defendants allegedly linked to the sale of Lighthouse, a sophisticated phishing kit that makes it simple for even novices to steal payment card data from mobile users. Google said Lighthouse has harmed more than a million victims across 120 countries.

A component of the Chinese phishing kit Lighthouse made to target customers of The Toll Roads, which refers to several state routes through Orange County, Calif.

Lighthouse is one of several prolific phishing-as-a-service operations known as the “Smishing Triad,” and collectively they are responsible for sending millions of text messages that spoof the U.S. Postal Service to supposedly collect some outstanding delivery fee, or that pretend to be a local toll road operator warning of a delinquent toll fee. More recently, Lighthouse has been used to spoof e-commerce websites, financial institutions and brokerage firms.

Regardless of the text message lure or brand used, the basic scam remains the same: After the visitor enters their payment information, the phishing site will automatically attempt to enroll the card as a mobile wallet from Apple or Google. The phishing site then tells the visitor that their bank is going to verify the transaction by sending a one-time code that needs to be entered into the payment page before the transaction can be completed.

If the recipient provides that one-time code, the scammers can link the victim’s card data to a mobile wallet on a device that they control. Researchers say the fraudsters usually load several stolen wallets onto each mobile device, and wait 7-10 days after that enrollment before selling the phones or using them for fraud.

Google called the scale of the Lighthouse phishing attacks “staggering.” A May 2025 report from Silent Push found the domains used by the Smishing Triad are rotated frequently, with approximately 25,000 phishing domains active during any 8-day period.

Google’s lawsuit alleges the purveyors of Lighthouse violated the company’s trademarks by including Google’s logos on countless phishing websites. The complaint says Lighthouse offers over 600 templates for phishing websites of more than 400 entities, and that Google’s logos were featured on at least a quarter of those templates.

Google is also pursuing Lighthouse under the Racketeer Influenced and Corrupt Organizations (RICO) Act, saying the Lighthouse phishing enterprise encompasses several connected threat actor groups that work together to design and implement complex criminal schemes targeting the general public.

According to Google, those threat actor teams include a “developer group” that supplies the phishing software and templates; a “data broker group” that provides a list of targets; a “spammer group” that provides the tools to send fraudulent text messages in volume; a “theft group,” in charge of monetizing the phished information; and an “administrative group,” which runs their Telegram support channels and discussion groups designed to facilitate collaboration and recruit new members.

“While different members of the Enterprise may play different roles in the Schemes, they all collaborate to execute phishing attacks that rely on the Lighthouse software,” Google’s complaint alleges. “None of the Enterprise’s Schemes can generate revenue without collaboration and cooperation among the members of the Enterprise. All of the threat actor groups are connected to one another through historical and current business ties, including through their use of Lighthouse and the online community supporting its use, which exists on both YouTube and Telegram channels.”

Silent Push’s May report observed that the Smishing Triad boasts it has “300+ front desk staff worldwide” involved in Lighthouse, staff that is mainly used to support various aspects of the group’s fraud and cash-out schemes.

An image shared by an SMS phishing group shows a panel of mobile phones responsible for mass-sending phishing messages. These panels require a live operator because the one-time codes being shared by phishing victims must be used quickly as they generally expire within a few minutes.

Google alleges that in addition to blasting out text messages spoofing known brands, Lighthouse makes it easy for customers to mass-create fake e-commerce websites that are advertised using Google Ads accounts (and paid for with stolen credit cards). These phony merchants collect payment card information at checkout, and then prompt the customer to expect and share a one-time code sent from their financial institution.

Once again, that one-time code is being sent by the bank because the fake e-commerce site has just attempted to enroll the victim’s payment card data in a mobile wallet. By the time a victim understands they will likely never receive the item they just purchased from the fake e-commerce shop, the scammers have already run through hundreds of dollars in fraudulent charges, often at high-end electronics stores or jewelers.

Ford Merrill works in security research at SecAlliance, a CSIS Security Group company, and he’s been tracking Chinese SMS phishing groups for several years. Merrill said many Lighthouse customers are now using the phishing kit to erect fake e-commerce websites that are advertised on Google and Meta platforms.

“You find this shop by searching for a particular product online or whatever, and you think you’re getting a good deal,” Merrill said. “But of course you never receive the product, and they will phish that one-time code at checkout.”

Merrill said some of the phishing templates include payment buttons for services like PayPal, and that victims who choose to pay through PayPal can also see their PayPal accounts hijacked.

A fake e-commerce site from the Smishing Triad spoofing PayPal on a mobile device.

“The main advantage of the fake e-commerce site is that it doesn’t require them to send out message lures,” Merrill said, noting that the fake vendor sites have more staying power than traditional phishing sites because it takes far longer for them to be flagged for fraud.

Merrill said Google’s legal action may temporarily disrupt the Lighthouse operators, and could make it easier for U.S. federal authorities to bring criminal charges against the group. But he said the Chinese mobile phishing market is so lucrative right now that it’s difficult to imagine a popular phishing service voluntarily turning out the lights.

Merrill said Google’s lawsuit also can help lay the groundwork for future disruptive actions against Lighthouse and other phishing-as-a-service entities that are operating almost entirely on Chinese networks. According to Silent Push, a majority of the phishing sites created with these kits are sitting at two Chinese hosting companies: Tencent (AS132203) and Alibaba (AS45102).
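As an illustrative aside (not part of Krebs’s reporting), here is a minimal sketch of how a defender might check which autonomous system announces a suspect IP, assuming the third-party dnspython library and Team Cymru’s public IP-to-ASN DNS service; the example address is just a well-known public resolver used as a harmless demo.

```python
# Minimal ASN lookup sketch using Team Cymru's origin.asn.cymru.com service.
# Requires dnspython (pip install dnspython). The example IP is Google's public
# resolver (expected ASN 15169), used only as a harmless demonstration target.
import dns.resolver


def asn_for_ipv4(ip: str) -> str:
    """Return the 'ASN | prefix | country | registry | date' TXT record for an IPv4."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    answers = dns.resolver.resolve(f"{reversed_octets}.origin.asn.cymru.com", "TXT")
    return next(iter(answers)).strings[0].decode()


if __name__ == "__main__":
    # A result starting with 132203 (Tencent) or 45102 (Alibaba) would fit the
    # hosting pattern Silent Push describes for these phishing kits.
    print(asn_for_ipv4("8.8.8.8"))
```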

“Once Google has a default judgment against the Lighthouse guys in court, theoretically they could use that to go to Alibaba and Tencent and say, ‘These guys have been found guilty, here are their domains and IP addresses, we want you to shut these down or we’ll include you in the case.'”

If Google can bring that kind of legal pressure consistently over time, Merrill said, they might succeed in increasing costs for the phishers and more frequently disrupting their operations.

“If you take all of these Chinese phishing kit developers, I have to believe it’s tens of thousands of Chinese-speaking people involved,” he said. “The Lighthouse guys will probably burn down their Telegram channels and disappear for a while. They might call it something else or redevelop their service entirely. But I don’t believe for a minute they’re going to close up shop and leave forever.”

1 million victims, 17,500 fake sites: Google takes on toll-fee scammers

13 November 2025 at 09:43

A Phishing-as-a-Service (PhaaS) platform based in China, known as “Lighthouse,” is the subject of a new Google lawsuit.

Lighthouse enables smishing (SMS phishing) campaigns, and if you’re in the US there is a good chance you’ve seen its texts about a small amount you supposedly owe in toll fees.

Google’s lawsuit brings claims against the Lighthouse platform under federal racketeering and fraud statutes, including the Racketeer Influenced and Corrupt Organizations Act (RICO), the Lanham Act, and the Computer Fraud and Abuse Act.

The texts lure targets to websites that impersonate toll authorities or other trusted organizations. The goal is to steal personal information and credit card numbers for use in further financial fraud.

As we reported in October 2025, Project Red Hook launched to combine the power of the US Homeland Security Investigations (HSI), law enforcement partners, and businesses to raise awareness of how Chinese organized crime groups use gift cards to launder money.

These toll, postage, and refund scams might look different on the surface, but they all feed the same machine, each one crafted to look like an urgent government or service message demanding a small fee. Together, they form an industrialized text-scam ecosystem that’s earned Chinese crime groups more than $1 billion in just three years.

Google says Lighthouse alone affected more than 1 million victims across 120 countries. A September report by Netcraft discussed two phishing campaigns believed to be associated with Lighthouse and “Lucid,” a very similar PhaaS platform. Since identifying these campaigns, Netcraft has detected more than 17,500 phishing domains targeting 316 brands from 74 countries.

As grounds for the lawsuit, Google says it found at least 107 phishing website templates that feature its own branding to boost credibility. But a lawsuit can only go so far, and Google says robust public policy is needed to address the broader threat of scams:

“We are collaborating with policymakers and are today announcing our endorsement of key bipartisan bills in the U.S. Congress.”

Will lawsuits, disruptions, and even bills make toll-fee scams go away? Not very likely. The only thing that will really help is if their source of income dries up because people stop falling for smishing. Education is the biggest lever.

Red flags in smishing messages

There are some tell-tale signs in these scams to look for (a rough scoring sketch follows the list):

  1. Spelling and grammar mistakes: the scammers often mangle dates, for example “September 10nd” or “9st” instead of “10th” or “9th”.
  2. Urgency: you only have one or two days to pay. Or else…
  3. The over-the-top threats: Real agencies won’t say your “credit score will be affected” for an unpaid traffic violation.
  4. Made-up legal codes: “Ohio Administrative Code 15C-16.003” doesn’t match any real Ohio BMV administrative codes. When a code looks fake, it probably is!
  5. Sketchy payment link: Truly trusted organizations don’t send urgent “pay now or else” links by text.
  6. Vague or missing personalization: Genuine government agencies tend to use your legal name, not a generic scare message sent to many people at the same time.
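Here is that rough scoring sketch, purely illustrative and not from the article; the patterns, keywords, and sample message below are invented for demonstration and would need tuning on real texts.

```python
# Illustrative only: score a text message against some of the red flags above.
# The keyword lists are made up for demonstration, not a vetted detection rule.
import re

ORDINAL = re.compile(r"\b(\d{1,2})(st|nd|rd|th)\b", re.IGNORECASE)
URGENCY = re.compile(r"\b(immediately|final notice|within (24|48) hours|last chance)\b", re.IGNORECASE)
THREAT = re.compile(r"\b(credit score|legal action|suspend(ed)?|penalt(y|ies))\b", re.IGNORECASE)
LINK = re.compile(r"https?://\S+", re.IGNORECASE)
GENERIC = re.compile(r"\b(dear (customer|user)|valued customer)\b", re.IGNORECASE)


def _correct_suffix(n: int) -> str:
    # 11th, 12th, and 13th are exceptions to the 1st/2nd/3rd pattern
    if 10 <= n % 100 <= 13:
        return "th"
    return {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")


def smishing_score(message: str) -> int:
    """Count how many of the listed red flags a message trips."""
    flags = 0
    # Mangled dates such as "September 10nd" or "9st"
    if any(s.lower() != _correct_suffix(int(n)) for n, s in ORDINAL.findall(message)):
        flags += 1
    # Urgency, over-the-top threats, a "pay now" link, and a generic greeting
    flags += bool(URGENCY.search(message))
    flags += bool(THREAT.search(message))
    flags += bool(LINK.search(message))
    flags += bool(GENERIC.search(message))
    return flags


sample = ("Final notice: your unpaid toll from September 10nd must be paid "
          "immediately at https://pay-toll.example or your credit score will be affected.")
print(smishing_score(sample))  # several flags at once is a strong hint to ignore the text
```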

Be alert to scams

Recognizing scams is the most important part of protecting yourself, so always consider these golden rules:

  • Always search phone numbers and email addresses to look for associations with known scams.
  • When in doubt, go directly to the website of the organization that contacted you to see if there are any messages for you.
  • Do not get rushed into decisions without thinking them through.
  • Do not click on links in unsolicited text messages.
  • Do not reply, even if the text message explicitly tells you to do so.

If you have engaged with the scammers’ website:

  • Immediately change your passwords for any accounts that may have been compromised. 
  • Contact your bank or financial institution to report the incident and take any necessary steps to protect your accounts, such as freezing them or monitoring for suspicious activity. 
  • Consider a fraud alert or credit freeze. To start layering protection, you might want to place a fraud alert or credit freeze on your credit file with all three of the primary credit bureaus. This makes it harder for fraudsters to open new accounts in your name.
  • US citizens can report confirmed cases of identity theft to the FTC at identitytheft.gov.

Pro tip: You can upload suspicious messages of any kind to Malwarebytes Scam Guard. It will tell you whether it’s likely to be a scam and advise you what to do.


Protecting Access to the Law—and Beneficial Uses of AI

30 September 2025 at 00:26

As the first copyright cases concerning AI reach appeals courts, EFF wants to protect important, beneficial uses of this technology—including AI for legal research. That’s why we weighed in on the long-running case of Thomson Reuters v. ROSS Intelligence. This case raises at least two important issues: the use of (possibly) copyrighted material to train a machine learning AI system, and public access to legal texts.  

ROSS Intelligence was a legal research startup that built an AI-based tool for locating judges’ written opinions based on natural language queries—a competitor to ubiquitous legal research platforms like Lexis and Thomson Reuters’ Westlaw. To build its tool, ROSS hired another firm to read through thousands of the “West headnotes” that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call “holdings”) that the headnotes identified. ROSS used those paraphrases to train its tool. Importantly, the ROSS tool didn’t output any West headnotes, or even the paraphrases of those headnotes; it simply directed the user to the original judges’ decisions. Still, Thomson sued ROSS for copyright infringement, arguing that using the headnotes without permission was illegal.

Early decisions in the suit were encouraging. EFF wrote about how the court allowed ROSS to bring an antitrust counterclaim against Thomson Reuters, letting them try to prove that Thomson was abusing monopoly power. And the trial judge initially ruled that ROSS’s use of the West headnotes was fair use under copyright law. 

The case then took turns for the worse. ROSS was unable to prove its antitrust claim. The trial judge issued a new opinion reversing his earlier decision and finding that ROSS’s use was not fair but rather infringed Thomson’s copyrights. And in the meantime, ROSS had gone out of business (though it continues to defend itself in court).  

The court’s new decision on copyright was particularly worrisome. It ruled that West headnotes—a few lines of text copying or summarizing a single legal conclusion from a judge’s written opinion—could be copyrighted, and that using them to train the ROSS tool was not fair use, in part because ROSS was a competitor to Thomson Reuters. And the court rejected ROSS’s attempt to avoid any illegal copying by using a “clean room” procedure often used in software development. The decision also threatens to limit the public’s access to legal texts. 

EFF weighed in with an amicus brief joined by the American Library Association, the Association of Research Libraries, the Internet Archive, Public Knowledge, and Public.Resource.Org. We argued that West headnotes are not copyrightable in the first place, since they simply restate individual points from judges’ opinions with no meaningful creative contributions. And even if copyright does attach to the headnotes, we argued, the source material is entirely factual statements about what the law is, and West’s contribution was minimal, so fair use should have tipped in ROSS’s favor. The trial judge had found that the factual nature of the headnotes favored ROSS, but dismissed this factor as unimportant, effectively writing it out of the law. 

This case is one of the first to touch on copyright and AI, and is likely to influence many of the other cases that are already pending (with more being filed all the time). That’s why we’re trying to help the appeals court get this one right. The law should encourage the creation of AI tools to digest and identify facts for use by researchers, including facts about the law. 

Fair Use Protects Everyone—Even the Disney Corporation

26 September 2025 at 13:16

Jimmy Kimmel has been in the news a lot recently, which means the ongoing lawsuit against him by perennial late-night punching bag/convicted fraudster/former congressman George Santos flew under the radar. But what happened in that case is an essential illustration of the limits of both copyright law and the “fine print” terms of service on websites and apps. 

What happened was this: Kimmel and his staff saw that Santos was on Cameo, which allows people to purchase short videos from various public figures with requested language. Usually it’s something like “happy birthday” or “happy retirement.” In the case of Kimmel and his writers, they set out to see if there was anything they couldn’t get Santos to say on Cameo. For this to work, they obviously didn’t disclose that it was Jimmy Kimmel Live! asking for the videos.  

Santos did not like the segment, called “Will Santos Say It?”, which aired clips of these videos. He sued Kimmel, ABC, and ABC’s parent company, Disney. He alleged both copyright infringement and breach of contract—the contract in this case being Cameo’s terms of service. He lost on all counts, twice: his case was dismissed at the district court level, and that dismissal was then upheld by an appeals court.

On the copyright claim, Kimmel and Disney argued and won on the grounds of fair use. The court cited precedent that fair use excuses what might be strictly seen as infringement if such a finding would “stifle the very creativity” that copyright is meant to promote. In this case, the use of the videos was part of the ongoing commentary by Jimmy Kimmel Live! around whether there was anything Santos wouldn’t say for money. Santos tried to argue that since this was their purpose from the outset, the use wasn’t transformative. Which... isn’t how it works. Santos’ purpose was, presumably, to fulfill a request sent through the app. The show’s purpose was to collect enough examples of a behavior to show a pattern and comment on it.  

Santos tried to say that their not disclosing what the reason was invalidated the fair use argument because it was “deceptive.” But the court found that the record didn’t show that the deception was designed to replace the market for Santos’s Cameos. It bears repeating: commenting on the quality of a product or the person making it is not legally actionable interference with a business. If someone tells you that a movie, book, or, yes, Cameo isn’t worth anything because of its ubiquity or quality and shows you examples, that’s not a deceptive business practice. In fact, undercover quality checks and reviews are fairly standard practices! Is this a funnier and more entertaining example than a restaurant review? Yes. That doesn’t make it unprotected by fair use.  

It’s nice to have this case as a reminder that, despite everything the major studios often argue, fair use protects everyone, including them. Don’t hold your breath on them remembering this the next time someone tries to make a YouTube review of a Hollywood movie using clips.

Another claim from this case that is less obvious but just as important involves the Cameo terms of service. We often see contracts being used to restrict people’s fair use rights. Cameo offers different kinds of videos for purchase. The most well-known comes with a personal use license, the “happy birthdays,” and so on. They also offer a “commercial” use license, presumably if you want to use the videos to generate revenue, like you do with an ad or paid endorsement. However, in this case, the court found that the terms of service are a contract between a customer and Cameo, not between the customer and the video maker. Cameo’s terms of service explicitly lay out when their terms apply to the person selling a video, and they don’t create a situation where Santos can use those terms to sue Jimmy Kimmel Live! According to the court, the terms don’t even imply a shared understanding and contract between the two parties.  

It's so rare to find a situation where the wall of text that most terms of service consist of actually helps protect free expression; it’s a pleasant surprise to see it here.  

In general, we at EFF hate it when these kinds of contracts—you know the ones, where you hit accept after scrolling for ages just so you can use the app—are used to constrain users’ rights. Fair use is supposed to protect us all from overly strict interpretations of copyright law, but abusive terms of service can erode those rights. We’ll keep fighting for those rights and the people who use them, even if the one exercising fair use is Disney.  

Inside a Dark Adtech Empire Fed by Fake CAPTCHAs

12 June 2025 at 18:14

Late last year, security researchers made a startling discovery: Kremlin-backed disinformation campaigns were bypassing moderation on social media platforms by leveraging the same malicious advertising technology that powers a sprawling ecosystem of online hucksters and website hackers. A new report on the fallout from that investigation finds this dark ad tech industry is far more resilient and incestuous than previously known.

Image: Infoblox.

In November 2024, researchers at the security firm Qurium published an investigation into “Doppelganger,” a disinformation network that promotes pro-Russian narratives and infiltrates Europe’s media landscape by pushing fake news through a network of cloned websites.

Doppelganger campaigns use specialized links that bounce the visitor’s browser through a long series of domains before the fake news content is served. Qurium found Doppelganger relies on a sophisticated “domain cloaking” service, a technology that allows websites to present different content to search engines compared to what regular visitors see. The use of cloaking services helps the disinformation sites remain online longer than they otherwise would, while ensuring that only the targeted audience gets to view the intended content.
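As a small illustrative aside (not part of Krebs’s report), the HTTP-level hops behind a suspicious link can be enumerated with a few lines of Python; this sketch assumes the third-party requests library, uses a placeholder URL, and will not surface JavaScript-based redirects, which cloaking services often rely on.

```python
# Diagnostic sketch: list the HTTP redirect hops behind a link. Illustrative
# only; the URL is a placeholder and JavaScript redirects won't show up here.
import requests  # pip install requests


def trace_redirects(url: str, timeout: float = 10.0) -> list[str]:
    """Return every URL visited via HTTP redirects, ending with the final page."""
    resp = requests.get(
        url,
        timeout=timeout,
        allow_redirects=True,
        # Cloaking services key off request details like User-Agent, so the
        # chain you see can differ depending on what client you claim to be.
        headers={"User-Agent": "Mozilla/5.0"},
    )
    return [r.url for r in resp.history] + [resp.url]


if __name__ == "__main__":
    for hop in trace_redirects("https://example.com/"):
        print(hop)
```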

Qurium discovered that Doppelganger’s cloaking service also promoted online dating sites, and shared much of the same infrastructure with VexTrio, which is thought to be the oldest malicious traffic distribution system (TDS) in existence. While TDSs are commonly used by legitimate advertising networks to manage traffic from disparate sources and to track who or what is behind each click, VexTrio’s TDS largely manages web traffic from victims of phishing, malware, and social engineering scams.

BREAKING BAD

Digging deeper, Qurium noticed Doppelganger’s cloaking service used an Internet provider in Switzerland as the first entry point in a chain of domain redirections. They also noticed the same infrastructure hosted a pair of co-branded affiliate marketing services that were driving traffic to sketchy adult dating sites: LosPollos[.]com and TacoLoco[.]co.

The LosPollos ad network incorporates many elements and references from the hit series “Breaking Bad,” mirroring the fictional “Los Pollos Hermanos” restaurant chain that served as a money laundering operation for a violent methamphetamine cartel.

The LosPollos advertising network invokes characters and themes from the hit show Breaking Bad. The logo for LosPollos (upper left) is the image of Gustavo Fring, the fictional chicken restaurant chain owner in the show.

Affiliates who sign up with LosPollos are given JavaScript-heavy “smartlinks” that drive traffic into the VexTrio TDS, which in turn distributes the traffic among a variety of advertising partners, including dating services, sweepstakes offers, bait-and-switch mobile apps, financial scams and malware download sites.

LosPollos affiliates typically stitch these smart links into WordPress websites that have been hacked via known vulnerabilities, and those affiliates will earn a small commission each time an Internet user referred by any of their hacked sites falls for one of these lures.

The Los Pollos advertising network promoting itself on LinkedIn.

According to Qurium, TacoLoco is a traffic monetization network that uses deceptive tactics to trick Internet users into enabling “push notifications,” a cross-platform browser standard that allows websites to show pop-up messages which appear outside of the browser. For example, on Microsoft Windows systems these notifications typically show up in the bottom right corner of the screen — just above the system clock.

In the case of VexTrio and TacoLoco, the notification approval requests themselves are deceptive — disguised as “CAPTCHA” challenges designed to distinguish automated bot traffic from real visitors. For years, VexTrio and its partners have successfully tricked countless users into enabling these site notifications, which are then used to continuously pepper the victim’s device with a variety of phony virus alerts and misleading pop-up messages.

Examples of VexTrio landing pages that lead users to accept push notifications on their device.

According to a December 2024 annual report from GoDaddy, nearly 40 percent of compromised websites in 2024 redirected visitors to VexTrio via LosPollos smartlinks.

ADSPRO AND TEKNOLOGY

On November 14, 2024, Qurium published research to support its findings that LosPollos and TacoLoco were services operated by Adspro Group, a company registered in the Czech Republic and Russia, and that Adspro runs its infrastructure at the Swiss hosting providers C41 and Teknology SA.

Qurium noted the LosPollos and TacoLoco sites state that their content is copyrighted by ByteCore AG and SkyForge Digital AG, both Swiss firms that are run by the owner of Teknology SA, Giulio Vitorrio Leonardo Cerutti. Further investigation revealed LosPollos and TacoLoco were apps developed by a company called Holacode, which lists Cerutti as its CEO.

The apps marketed by Holacode include numerous VPN services, as well as one called Spamshield that claims to stop unwanted push notifications. But in January, Infoblox said it tested the app on its own mobile devices and found that it hides the user’s notifications, then after 24 hours stops hiding them and demands payment. Spamshield subsequently changed its developer name from Holacode to ApLabz, although Infoblox noted that the terms of service for several of the rebranded ApLabz apps still referenced Holacode.

Incredibly, Cerutti threatened to sue me for defamation before I’d even uttered his name or sent him a request for comment (Cerutti sent the unsolicited legal threat back in January after his company and my name were merely tagged in an Infoblox post on LinkedIn about VexTrio).

Asked to comment on the findings by Qurium and Infoblox, Cerutti vehemently denied being associated with VexTrio. Cerutti asserted that his companies all strictly adhere to the regulations of the countries in which they operate, and that they have been completely transparent about all of their operations.

“We are a group operating in the advertising and marketing space, with an affiliate network program,” Cerutti responded. “I am not [going] to say we are perfect, but I strongly declare we have no connection with VexTrio at all.”

“Unfortunately, as a big player in this space we also get to deal with plenty of publisher fraud, sketchy traffic, fake clicks, bots, hacked, listed and resold publisher accounts, etc, etc.,” Cerutti continued. “We bleed lots of money to such malpractices and conduct regular internal screenings and audits in a constant battle to remove bad traffic sources. It is also a highly competitive space, where some upstarts will often play dirty against more established mainstream players like us.”

Working with Qurium, researchers at the security firm Infoblox released details about VexTrio’s infrastructure to their industry partners. Just four days after Qurium published its findings, LosPollos announced it was suspending its push monetization service. Less than a month later, Adspro had rebranded to Aimed Global.

A mind map illustrating some of the key findings and connections in the Infoblox and Qurium investigations.

A REVEALING PIVOT

In March 2025, researchers at GoDaddy chronicled how DollyWay — a malware strain that has consistently redirected victims to VexTrio throughout its eight years of activity — suddenly stopped doing that on November 20, 2024. Virtually overnight, DollyWay and several other malware families that had previously used VexTrio began pushing their traffic through another TDS called Help TDS.

Digging further into historical DNS records and the unique code scripts used by the Help TDS, Infoblox determined it has long enjoyed an exclusive relationship with VexTrio (at least until LosPollos ended its push monetization service in November).

In a report released today, Infoblox said an exhaustive analysis of the JavaScript code, website lures, smartlinks and DNS patterns used by VexTrio and Help TDS linked them with at least four other TDS operators (not counting TacoLoco). Those four entities — Partners House, BroPush, RichAds and RexPush — are all Russia-based push monetization programs that pay affiliates to drive signups for a variety of schemes, but mostly online dating services.

“As Los Pollos push monetization ended, we’ve seen an increase in fake CAPTCHAs that drive user acceptance of push notifications, particularly from Partners House,” the Infoblox report reads. “The relationship of these commercial entities remains a mystery; while they are certainly long-time partners redirecting traffic to one another, and they all have a Russian nexus, there is no overt common ownership.”

Renee Burton, vice president of threat intelligence at Infoblox, said the security industry generally treats the deceptive methods used by VexTrio and other malicious TDSs as a kind of legally grey area that is mostly associated with less dangerous security threats, such as adware and scareware.

But Burton argues that this view is myopic, and helps perpetuate a dark adtech industry that also pushes plenty of straight-up malware, noting that hundreds of thousands of compromised websites around the world every year redirect victims to the tangled web of VexTrio and VexTrio-affiliate TDSs.

“These TDSs are a nefarious threat, because they’re the ones you can connect to the delivery of things like information stealers and scams that cost consumers billions of dollars a year,” Burton said. “From a larger strategic perspective, my takeaway is that Russian organized crime has control of malicious adtech, and these are just some of the many groups involved.”

WHAT CAN YOU DO?

As KrebsOnSecurity warned way back in 2020, it’s a good idea to be very sparing in approving notifications when browsing the Web. In many cases these notifications are benign, but as we’ve seen there are numerous dodgy firms that are paying site owners to install their notification scripts, and then reselling that communications pathway to scammers and online hucksters.

If you’d like to prevent sites from ever presenting notification requests, all of the major browser makers let you do this — either across the board or on a per-website basis. While it is true that blocking notifications entirely can break the functionality of some websites, doing this for any devices you manage on behalf of your less tech-savvy friends or family members might end up saving everyone a lot of headache down the road.

To modify site notification settings in Mozilla Firefox, navigate to Settings, Privacy & Security, Permissions, and click the “Settings” tab next to “Notifications.” That page will display any notifications already permitted and allow you to edit or delete any entries. Tick the box next to “Block new requests asking to allow notifications” to stop them altogether.

In Google Chrome, click the icon with the three dots to the right of the address bar, scroll all the way down to Settings, Privacy and Security, Site Settings, and Notifications. Select the “Don’t allow sites to send notifications” button if you want to banish notification requests forever.

In Apple’s Safari browser, go to Settings, Websites, and click on Notifications in the sidebar. Uncheck the option to “allow websites to ask for permission to send notifications” if you wish to turn off notification requests entirely.
