Court: “Because Trump said to” may not be a legally valid defense

9 December 2025 at 12:47

On Monday, US District Court Judge Patti Saris vacated a Trump executive order that brought a halt to all offshore wind power development, as well as some projects on land. That order had called for the suspension of all permitting for wind power on federal land and waters pending a review of current practices. This led states and an organization representing wind power companies to sue, claiming among other things that the suspension was arbitrary and capricious.

More than 10 months after the relevant government agencies were ordered to re-evaluate the permitting process, testimony revealed that they had barely begun to develop the concept of a review. As such, the only reason they could offer in defense of the suspension consisted of Trump’s executive order and a Department of the Interior memo implementing it. “Whatever level of explanation is required when deviating from longstanding agency practice,” Judge Saris wrote, “this is not it.”

Lifting Trump’s suspension does not require the immediate approval of any wind projects. Instead, the relevant agencies are likely to continue following Trump’s wishes by slow-walking leasing and licensing processes, which may force states and project owners to sue individually. But it does provide a legal backdrop for any suits that ultimately occur, one in which the government’s actions have little justification beyond Trump’s personal animosity toward wind power.


Ex-Employee Sues Washington Post Over Oracle EBS-Related Data Breach

8 December 2025 at 00:16

The Washington Post last month reported that it was among the victims of breaches tied to the Oracle EBS vulnerabilities, with a threat actor compromising the data of more than 9,700 former and current employees and contractors. Now, a former worker has launched a class-action lawsuit against the Post, claiming inadequate security.

The post Ex-Employee Sues Washington Post Over Oracle EBS-Related Data Breach appeared first on Security Boulevard.

Google Uses Courts, Congress to Counter Massive Smishing Campaign

16 November 2025 at 12:05

Google is suing the Smishing Triad group behind the Lighthouse phishing-as-a-service kit, which has been used over the past two years to scam more than 1 million people around the world with fraudulent package-delivery or E-ZPass toll-fee messages and to steal millions of credit card numbers. Google is also backing bills in Congress to address the threat.


1 million victims, 17,500 fake sites: Google takes on toll-fee scammers

13 November 2025 at 09:43

A Phishing-as-a-Service (PhaaS) platform based in China, known as “Lighthouse,” is the subject of a new Google lawsuit.

Lighthouse enables smishing (SMS phishing) campaigns, and if you’re in the US there is a good chance you’ve seen its texts about a small amount you supposedly owe in toll fees.

Google’s lawsuit brings claims against the Lighthouse platform under federal racketeering and fraud statutes, including the Racketeer Influenced and Corrupt Organizations Act (RICO), the Lanham Act, and the Computer Fraud and Abuse Act.

The texts lure targets to websites that impersonate toll authorities or other trusted organizations. The goal is to steal personal information and credit card numbers for use in further financial fraud.

As we reported in October 2025, Project Red Hook launched to combine the efforts of US Homeland Security Investigations (HSI), law enforcement partners, and businesses to raise awareness of how Chinese organized crime groups use gift cards to launder money.

These toll, postage, and refund scams might look different on the surface, but they all feed the same machine, each one crafted to look like an urgent government or service message demanding a small fee. Together, they form an industrialized text-scam ecosystem that’s earned Chinese crime groups more than $1 billion in just three years.

Google says Lighthouse alone affected more than 1 million victims across 120 countries. A September report by Netcraft discussed two phishing campaigns believed to be associated with Lighthouse and “Lucid,” a very similar PhaaS platform. Since identifying these campaigns, Netcraft has detected more than 17,500 phishing domains targeting 316 brands from 74 countries.

As grounds for the lawsuit, Google says it found at least 107 phishing website templates that feature its own branding to boost credibility. But a lawsuit can only go so far, and Google says robust public policy is needed to address the broader threat of scams:

“We are collaborating with policymakers and are today announcing our endorsement of key bipartisan bills in the U.S. Congress.”

Will lawsuits, disruptions, and even bills make toll-fee scams go away? Not very likely. The only thing that will really help is if their source of income dries up because people stop falling for smishing. Education is the biggest lever.

Red flags in smishing messages

There are some tell-tale signs in these scams to look for:

  1. Spelling and grammar mistakes: The scammers seem to have trouble formatting dates, for example “September 10nd” or “9st” (instead of 9th or 1st).
  2. Urgency: You only have one or two days to pay. Or else…
  3. Over-the-top threats: Real agencies won’t say your “credit score will be affected” over an unpaid traffic violation.
  4. Made-up legal codes: “Ohio Administrative Code 15C-16.003” doesn’t match any real Ohio BMV administrative code. When a code looks fake, it probably is!
  5. Sketchy payment links: Truly trusted organizations don’t send urgent “pay now or else” links by text.
  6. Vague or missing personalization: Genuine government agencies tend to use your legal name, not a generic scare message sent to many people at the same time.
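As a rough illustration (and not how any real anti-smishing product works), the red flags above can be sketched as a simple heuristic checker. The keyword lists and flag names are assumptions chosen for the example:

```python
import re

# Hypothetical keyword lists for the "urgency" and "threats" red flags.
URGENCY = ("final notice", "within 24 hours", "immediately", "last warning")
THREATS = ("credit score", "suspended", "legal action", "penalties")

def red_flags(text: str) -> list[str]:
    """Return a list of smishing red flags found in a text message."""
    flags = []
    t = text.lower()
    # Red flag 1: malformed ordinal dates such as "10nd" or "9st".
    for m in re.finditer(r"\b(\d{1,2})(st|nd|rd|th)\b", t):
        n, suffix = int(m.group(1)), m.group(2)
        correct = {1: "st", 2: "nd", 3: "rd"}.get(
            n % 10 if n % 100 not in (11, 12, 13) else 0, "th")
        if suffix != correct:
            flags.append(f"malformed date ordinal: {m.group(0)}")
    # Red flag 2: artificial urgency.
    if any(p in t for p in URGENCY):
        flags.append("urgent deadline")
    # Red flag 3: over-the-top threats.
    if any(p in t for p in THREATS):
        flags.append("over-the-top threat")
    # Red flag 5: a payment link embedded in the text.
    if re.search(r"https?://", t):
        flags.append("payment link in text")
    return flags
```

A scam text typically trips several flags at once, while a legitimate message trips none; real detection systems weigh far more signals than this sketch does.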

Be alert to scams

Recognizing scams is the most important part of protecting yourself, so always consider these golden rules:

  • Always search phone numbers and email addresses to look for associations with known scams.
  • When in doubt, go directly to the website of the organization that contacted you to see if there are any messages for you.
  • Do not get rushed into decisions without thinking them through.
  • Do not click on links in unsolicited text messages.
  • Do not reply, even if the text message explicitly tells you to do so.

If you have engaged with the scammers’ website:

  • Immediately change your passwords for any accounts that may have been compromised. 
  • Contact your bank or financial institution to report the incident and take any necessary steps to protect your accounts, such as freezing them or monitoring for suspicious activity. 
  • Consider a fraud alert or credit freeze: Placing one on your credit file with each of the three primary credit bureaus makes it harder for fraudsters to open new accounts in your name.
  • US citizens can report confirmed cases of identity theft to the FTC at identitytheft.gov.

Pro tip: You can upload suspicious messages of any kind to Malwarebytes Scam Guard. It will tell you whether it’s likely to be a scam and advise you what to do.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

OpenAI Battles Court Order to Indefinitely Retain User Chat Data in NYT Copyright Dispute

12 November 2025 at 11:40


The demand started at 1.4 billion conversations.

That staggering initial request from The New York Times, later negotiated down to 20 million randomly sampled ChatGPT conversations, has thrust OpenAI into a legal fight that security experts warn could fundamentally reshape data retention practices across the AI industry. The copyright infringement lawsuit has evolved beyond intellectual property disputes into a broader battle over user privacy, data governance, and the obligations AI companies face when litigation collides with privacy commitments.

OpenAI received a court preservation order on May 13, directing the company to retain all output log data that would otherwise be deleted, regardless of user deletion requests or privacy regulation requirements. District Judge Sidney Stein affirmed the order on June 26 after OpenAI appealed, rejecting arguments that user privacy interests should override preservation needs identified in the litigation.

Privacy Commitments Clash With Legal Obligations

The preservation order forces OpenAI to maintain consumer ChatGPT and API user data indefinitely, directly conflicting with the company's standard 30-day deletion policy for conversations users choose not to save. This requirement encompasses data from December 2022 through November 2024, affecting ChatGPT Free, Plus, Pro, and Team subscribers, along with API customers without Zero Data Retention agreements.

ChatGPT Enterprise, ChatGPT Edu, and business customers with Zero Data Retention contracts remain excluded from the preservation requirements. The order does not change OpenAI's policy of not training models on business data by default.

OpenAI implemented restricted access protocols, limiting preserved data to a small, audited legal and security team. The company maintains this information remains locked down and cannot be used beyond meeting legal obligations. No data will be turned over to The New York Times, the court, or external parties at this time.

Also read: OpenAI Announces Safety and Security Committee Amid New AI Model Development

Copyright Case Drives Data Preservation Demands

The New York Times filed its copyright infringement lawsuit in December 2023, alleging OpenAI illegally used millions of news articles to train large language models including ChatGPT and GPT-4. The lawsuit claims this unauthorized use constitutes copyright infringement and unfair competition, arguing OpenAI profits from intellectual property without permission or compensation.

The Times seeks more than monetary damages. The lawsuit demands destruction of all GPT models and training sets that use its copyrighted works, with potential statutory and actual damages reaching into the billions of dollars.

The newspaper's legal team argued their preservation request warranted approval partly because another AI company previously agreed to hand over 5 million private user chats in an unrelated case. OpenAI rejected this precedent as irrelevant to its situation.

Technical and Regulatory Complications

Complying with indefinite retention requirements presents significant engineering challenges. OpenAI must build systems capable of storing hundreds of millions of conversations from users worldwide, requiring months of development work and substantial financial investment.

The preservation order creates conflicts with international data protection regulations including GDPR. While OpenAI's terms of service allow data preservation for legal requirements—a point Judge Stein emphasized—the company argues The Times's demands exceed reasonable discovery scope and abandon established privacy norms.

OpenAI proposed several privacy-preserving alternatives, including targeted searches over preserved samples to identify conversations potentially containing New York Times article text. These suggestions aimed to provide only data relevant to copyright claims while minimizing broader privacy exposure.

Recent court modifications provided limited relief. As of September 26, 2025, OpenAI no longer must preserve all new chat logs going forward. However, the company must retain all data already saved under the previous order and maintain information from ChatGPT accounts flagged by The New York Times, with the newspaper authorized to expand its flagged user list while reviewing preserved records.

"Our long-term roadmap includes advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT. We will build fully automated systems to detect safety issues in our products. Only serious misuse and critical risks—such as threats to someone’s life, plans to harm others, or cybersecurity threats—may ever be escalated to a small, highly vetted team of human reviewers." - Dane Stuckey, Chief Information Security Officer, OpenAI 

Implications for AI Governance

The case transforms abstract AI privacy concerns into immediate operational challenges affecting 400 million ChatGPT users. Security practitioners note the preservation order shatters fundamental assumptions about data deletion in AI interactions.

OpenAI CEO Sam Altman characterized the situation as accelerating needs for "AI privilege" concepts, suggesting conversations with AI systems should receive protections similar to attorney-client privilege. The company frames unlimited data preservation as setting dangerous precedents for AI communication privacy.

The litigation presents concerning scenarios for enterprise users integrating ChatGPT into applications handling sensitive information. Organizations using OpenAI's technology for healthcare, legal, or financial services must reassess compliance with regulations including HIPAA and GDPR given indefinite retention requirements.

Legal analysts warn this case likely invites third-party discovery attempts, with litigants in unrelated cases seeking access to adversaries' preserved AI conversation logs. Such developments would further complicate data privacy issues and potentially implicate attorney-client privilege protections.

The outcome will significantly impact how AI companies access and utilize training data, potentially reshaping development and deployment of future AI technologies. Central questions remain unresolved regarding fair use doctrine application to AI model training and the boundaries of discovery in AI copyright litigation.

Also read: OpenAI’s SearchGPT: A Game Changer or Pandora’s Box for Cybersecurity Pros?

Google and Flo to pay $56 million after misusing users’ health data

26 September 2025 at 09:27

Popular period-tracking app Flo Health shared users’ intimate health data—such as menstrual cycles and fertility information—with Google and Meta, allegedly for targeted advertising purposes, according to multiple class-action lawsuits filed in the US and Canada.

Between 2016 and 2019, the developers of Flo Health shared intimate user data with companies including Facebook and Google, mobile marketing firm AppsFlyer, and Yahoo!-owned mobile analytics platform Flurry. 

Google and Flo Health reached settlements with plaintiffs in July, just before the case went to trial. The terms, disclosed this week in San Francisco federal court, stipulate that Google will pay $48 million and Flo Health will pay $8 million to compensate users who entered information about menstruation or pregnancy between November 2016 and February 2019.

In an earlier trial, co-defendant Meta was found liable for violating the California Invasion of Privacy Act by collecting the information of Flo app users without their consent. Meta is expected to appeal the verdict.

The FTC investigated Flo Health and concluded in 2021 that the company misled users about its data privacy practices. This led to a class-action lawsuit which also involved the now-defunct analytics company Flurry, which settled separately for $3.5 million in March.

Flo and Google denied the allegations despite agreeing to pay settlements. Big tech companies have increasingly chosen to settle class action lawsuits while explicitly denying any wrongdoing or legal liability—a common trend in high-profile privacy, antitrust, and data breach cases.

This reflects a worrying trend in which big tech pays off victims of privacy violations and other infractions. High-profile class-action lawsuits against, for example, Google, Meta, and Amazon grab headlines for holding tech giants accountable. But the only significant winners are often the lawyers, leaving victims to submit personal details yet again in exchange for, at best, a token payout.

By settling, companies can keep a grip on the potential damages and avoid the unpredictability of a jury verdict, which in large classes could reach into billions. Moreover, settlements often resolve legal uncertainty for these corporations without setting a legal precedent that could be used against them in future litigation or regulatory actions.

Looking at it from a cynical perspective, these companies treat such settlements as just another operational expense and continue with their usual practices.

In the long run, such agreements may undermine public trust and accountability, as affected consumers receive minimal compensation but never see a clear acknowledgment of harm or misconduct.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Google settles YouTube lawsuit over kids’ privacy invasion and data collection

21 August 2025 at 07:42

Google has agreed to a $30 million settlement in the US over allegations that it illegally collected data from underage YouTube users for targeted advertising.

The lawsuit claims Google tracked the personal information of children under 13 without proper parental consent, which is a violation of the Children’s Online Privacy Protection Act (COPPA). The tech giant denies any wrongdoing but opted for settlement, according to Reuters.

Does this sound like a re-run episode? There’s a reason you might think that. In 2019, Google settled another case with the US Federal Trade Commission (FTC), paying $170 million for allegedly collecting data from minors on YouTube without parental permission.

Plaintiffs in the recent case argued that despite that prior agreement, Google continued collecting information from children, thereby violating federal laws for years afterward.

Recently, YouTube created some turmoil by testing controversial artificial intelligence (AI) in the US to spot under-18s based on what they watch. To bypass the traditional method of having users fill out their birth dates, the platform is now examining the types of videos watched, search behavior, and account history to assess a user’s age. Whether that’s the way to prevent future lawsuits is questionable.

The class-action suit covers American children under 13 who watched YouTube videos between July 2013 and April 2020. According to the legal team representing the plaintiffs, as many as 35 million to 45 million people may be eligible for compensation. 

With annual revenue of $384 billion in 2024, $30 million will probably not have a large impact on Google. It may not even outweigh the profits made directly from the violations it was accused of.

How to claim

Based on typical class-action participation rates (1%-10%), the actual number of claimants will likely be in the hundreds of thousands to a few million. Those who successfully submit a claim could receive between $10 and $60 each, depending on the final number of validated claims, and before deducting legal fees and costs.
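The arithmetic behind that $10-$60 range is simple division over the settlement fund. The claimant counts below are illustrative assumptions, not figures from the settlement:

```python
SETTLEMENT_FUND = 30_000_000  # total settlement in USD

def per_claimant(claimants: int) -> float:
    # Gross figure: legal fees and administration costs are deducted
    # from the fund before distribution, so real payouts would be lower.
    return SETTLEMENT_FUND / claimants

print(per_claimant(500_000))    # 60.0 — low turnout, larger checks
print(per_claimant(3_000_000))  # 10.0 — high turnout, smaller checks
```

The same fund spread over six times as many claimants yields one-sixth the payout, which is why participation rates dominate the final per-person amount.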

If you believe your child, or you as a minor, might qualify for compensation based on these criteria, here are a few practical steps:

  • Review the eligibility period: Only children under 13 who viewed YouTube videos from July 2013 to April 2020 qualify.
  • Prepare documentation: Gather any records that could prove usage, such as email communications, registration confirmations, or even device logs showing relevant YouTube activity.
  • Monitor official channels: Typically, reputable law firms or consumer protection groups will post claimant instructions soon after a settlement. Avoid clicking on unsolicited emails or links promising easy payouts since these might be scams.
  • Be quick, but careful: Class-action settlements usually have short windows for submitting claims. Act promptly once the process opens but double-check that you’re on an official platform (such as the settlement administration site listed in legal notices).

How to protect your children’s privacy

Digital awareness and proactive security measures should always be top of mind when children use online platforms.

  • Regardless of your involvement in the settlement, it’s wise to check and use privacy settings on children’s devices and turn off personalized ad tracking wherever possible.
  • Some platforms have separate versions for different age groups. Use them where applicable.
  • Show an interest in what your kids are watching. Explaining works better than forbidding without providing reasons.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.
