
Child exploitation, grooming, and social media addiction claims put Meta on trial

12 February 2026 at 07:35

Meta is facing two trials over child safety allegations in California and New Mexico. The lawsuits are landmark cases, marking the first time that any such accusations have reached a jury. Although over 40 state attorneys general have filed suits about child safety issues with social media, none had gone to trial until now.

The New Mexico case, filed by Attorney General Raúl Torrez in December 2023, centers on child sexual exploitation. Torrez’s team built their evidence by posing as children online and documenting what happened next, in the form of sexual solicitations. The team brought the suit under New Mexico’s Unfair Trade Practices Act, a consumer protection statute that prosecutors argue sidesteps Section 230 protections.

The most damaging material in the trial, which is expected to run seven weeks, may be Meta’s own paperwork. Newly unsealed internal documents revealed that a company safety researcher had warned about the sheer scale of the problem, claiming that around half a million cases of child exploitation are happening daily. Torrez did not mince words about what he believes the platform has become, calling it an online marketplace for human trafficking. From the complaint:

“Meta’s platforms Facebook and Instagram are a breeding ground for predators who target children for human trafficking, the distribution of sexual images, grooming, and solicitation.”

The complaint’s emphasis on weak age verification touches on a broader issue regulators around the world are now grappling with: how platforms verify the age of their youngest users—and how easily those systems can be bypassed.

In our own research into children’s social media accounts, we found that creating underage profiles can be surprisingly straightforward. In some cases, minimal checks or self-declared birthdates were enough to access full accounts. We also identified loopholes that could allow children to encounter content they shouldn’t or make it easier for adults with bad intentions to find them.

The social media and VR giant has pushed back hard, calling the state’s investigation ethically compromised and accusing prosecutors of cherry-picking data. Defence attorney Kevin Huff argued that the company disclosed its risks rather than concealing them.

Yesterday, Stanford psychiatrist Dr. Anna Lembke told the court she believes Meta’s design features are addictive and that the company has been using the term “Problematic Internet Use” internally to avoid acknowledging addiction.

Meanwhile in Los Angeles, a separate bellwether case against Meta and Google opened on Monday. A 20-year-old woman identified only as KGM is at the center of the case. She alleges that YouTube and Instagram hooked her from childhood. She testified that she was watching YouTube at six, on Instagram by nine, and suffered from worsening depression and body dysmorphia. Her case, which TikTok and Snap settled before trial, is the first of more than 2,400 personal injury filings consolidated in the proceeding. Plaintiffs’ attorney Mark Lanier called it a case about:

“two of the richest corporations in history, who have engineered addiction in children’s brains.”

A litany of allegations

None of this appeared from nowhere. In 2021, whistleblower Frances Haugen leaked internal Facebook documents showing the company knew its platforms damaged teenage mental health. In 2023, Meta whistleblower Arturo Béjar testified before the Senate that the company ignored sexual endangerment of children.

Unredacted documents unsealed in the New Mexico case in early 2024 suggested something uglier still: that the company had actively marketed messaging platforms to children while suppressing safety features that weren’t considered profitable. Internal employees sounded alarms for years but executives reportedly chose growth, according to New Mexico AG Raúl Torrez. Last September, whistleblowers said that the company had ignored child sexual abuse in virtual reality environments.

Outside the courtroom, governments around the world are moving faster than the US Congress. Australia banned under-16s from social media in December 2025, becoming the first country to do so. France’s National Assembly followed, approving a ban on social media for under-15s in January by 130 votes to 21. Spain announced its own under-16 ban this month. By last count, at least 15 European governments were considering similar measures. Whether any of these bans will actually work is uncertain, particularly as young users openly discuss ways to bypass controls.

The United States, by contrast, has passed exactly one major federal child online safety law: the Children’s Online Privacy Protection Act (COPPA), in 1998. The Kids Online Safety Act (KOSA), introduced in 2022, passed the Senate 91-3 in mid-2024, then stalled in the House. It was reintroduced last May and has yet to reach a floor vote. States have tried to fill the gap, with 18 proposing similar legislation in 2025, but only one of those bills was enacted (in Nebraska). A comprehensive federal framework remains nowhere in sight.

On its most recent earnings call, Meta acknowledged it could face material financial losses this year. The pressure is no longer theoretical. The juries in Santa Fe and Los Angeles will now weigh whether the company’s design choices and safety measures crossed legal lines.

If you want to understand how social media platforms can expose children to harmful content—and what parents can realistically do about it—check out our research project on social media safety.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.


AI slop, begone! The viral musical virtuosos bringing brains and brilliance back to social media

11 February 2026 at 05:38

Whether making microtonal pop or playing Renaissance instruments with sheep bones, a crop of bold artists are making genuinely strange music go mainstream – but are they at the mercy of the algorithm?

Chloë Sobek is a Melbourne musician who plays the violone, a Renaissance precursor to the double bass. But instead of playing it in the traditional manner, she puts wobbling bits of cardboard between its strings or uses a sheep’s bone as a bow, and these weird interventions have become catnip for Instagram’s algorithm, getting her tens of thousands – sometimes hundreds of thousands – of views for each of her self-made performance videos. “Despite how it might appear, I’m a reasonably shy person,” she says.

When Laurie Anderson’s robo-minimalist masterwork O Superman hit No 2 in the UK charts in 1981, thanks to incessant airplay on John Peel’s radio show, it was a signal of a media outlet’s power to propel experimental music into the mainstream. That’s now happening again as prepared-instrument players such as Sobek, plus experimental pianists, microtonal singers and numerous other boundary-pushing solo performers, are routinely breaking out of underground circles thanks to videos – generally self-recorded at home – going viral on TikTok and Instagram.


© Photograph: Sandra Ebert


Meta confirms it’s working on premium subscription for its apps

29 January 2026 at 16:06

Meta plans to test exclusive features that will be incorporated in paid versions of Facebook, Instagram, and WhatsApp. It confirmed these plans to TechCrunch.

But these plans are not to be confused with the ad-free subscription options that Meta introduced for Facebook and Instagram in the EU, the European Economic Area, and Switzerland in late 2023 and framed as a way to comply with General Data Protection Regulation (GDPR) and Digital Markets Act requirements.

From November 2023, users in those regions could either keep using the services for free with personalized ads or pay a monthly fee for an ad‑free experience. European rules require Meta to get users’ consent in order to show them targeted ads, so this was an obvious attempt to recoup advertising revenue when users declined to give that consent.

This year, users in the UK were given the same choice: use Meta’s products for free or subscribe to use them without ads. But only grudgingly, judging by the tone in the offer… “As part of laws in your region, you have a choice.”

The ad-free option that has been rolling out coincides with the announcement of Meta’s premium subscriptions.

That ad-free option, however, is not what Meta is talking about now.

The newly announced plans are not about ads, and they are also separate from Meta Verified, which starts at around $15 a month and focuses on creators and businesses, offering a verification badge, better support, and anti‑impersonation protection.

Instead, these new subscriptions are likely to focus on additional features—more control over how users share and connect, and possibly tools such as expanded AI capabilities, unlimited audience lists, seeing who you follow that doesn’t follow you back, or viewing stories without the poster knowing it was you.

These examples are unconfirmed. All we know for sure is that Meta plans to test new paid features to see which ones users are willing to pay for and how much it can charge.

Meta has said these features will focus on productivity, creativity, and expanded AI.

My opinion

Unfortunately, this feels like another refusal to listen.

Most of us aren’t asking for more AI in our feeds. We’re asking for a basic sense of control: control over who sees us, what’s tracked about us, and how our data is used to feed an algorithm designed to keep us scrolling.

Users shouldn’t have to choose between being mined for behavioral data or paying a monthly fee just to be left alone. The message baked into “pay or be profiled” is that privacy is now a luxury good, not a default right. But while regulators keep saying the model is unlawful, the experience on the ground still nudges people toward the path of least resistance: accept the tracking and move on.

Even then, this level of choice is only available to users in Europe.

Why not offer the same option to users in the US? Or will it take stronger US privacy regulation to make that happen?



Meta blocks links to ICE List across Facebook, Instagram, and Threads

28 January 2026 at 12:22

Meta has started blocking its users from sharing links to ICE List, a website that has compiled the names of what it claims are Department of Homeland Security employees, a project the creators say is designed to hold those employees accountable.

Dominick Skinner, the creator of ICE List, tells WIRED that links to the website have been shared without issue on Meta’s platforms for more than six months.

“I think it's no surprise that a company run by a man who sat behind Trump at his inauguration, and donated to the destruction of the White House, has taken a stance that helps ICE agents retain anonymity,” says Skinner.


© Will Oliver/Getty Images

“IG is a drug”: Internal messages may doom Meta at social media addiction trial

27 January 2026 at 13:07

Anxiety, depression, eating disorders, and death. These can be the consequences for vulnerable kids who get addicted to social media, according to more than 1,000 personal injury lawsuits that seek to punish Meta and other platforms for allegedly prioritizing profits while downplaying child safety risks for years.

Social media companies have faced scrutiny before, with congressional hearings forcing CEOs to apologize, but until now, they've never had to convince a jury that they aren't liable for harming kids.

This week, the first high-profile lawsuit—considered a "bellwether" case that could set meaningful precedent in the hundreds of other complaints—goes to trial. That lawsuit documents the case of a 19-year-old, K.G.M, who hopes the jury will agree that Meta and YouTube caused psychological harm by designing features like infinite scroll and autoplay to push her down a path that she alleged triggered depression, anxiety, self-harm, and suicidality.


Received an Instagram password reset email? Here’s what you need to know

12 January 2026 at 16:04

Last week, many Instagram users began receiving unsolicited emails from the platform that warned about a password reset request.

The message said:

“Hi {username},
We got a request to reset your Instagram password.
If you ignore this message, your password will not be changed. If you didn’t request a password reset, let us know.”

Around the same time that users began receiving these emails, a cybercriminal using the handle “Solonik” put a dataset allegedly containing information about 17 million Instagram users up for sale on a Dark Web forum.

These 17 million or so records include:

  • Usernames
  • Full names
  • User IDs
  • Email addresses
  • Phone numbers
  • Countries
  • Partial locations

Please note that there are no passwords listed in the data.
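To see why even a password-free dump is dangerous, consider how trivially such records can be searched. The sketch below is purely illustrative – the field names and sample rows are invented, not taken from the actual leak – but it shows how an attacker (or a breach-notification service) can match an email address against millions of such records:

```python
import csv
import io

# Invented sample rows mimicking the kinds of fields reported in the dump.
SAMPLE_DUMP = """username,full_name,email,phone,country
jdoe,Jane Doe,jane@example.com,+15550100,US
asmith,Al Smith,al@example.org,+15550101,GB
"""

def emails_in_dump(dump_text: str) -> set[str]:
    """Collect every email address in a CSV-formatted dump, normalized for comparison."""
    reader = csv.DictReader(io.StringIO(dump_text))
    return {row["email"].strip().lower() for row in reader}

def is_exposed(email: str, dump_text: str) -> bool:
    """Check whether a given address appears anywhere in the dump."""
    return email.strip().lower() in emails_in_dump(dump_text)
```

Paired with the phone numbers and partial locations in the records, a single match like this is enough to craft a convincing, personalized phishing email.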

Despite the timing of the two events, Instagram denied this weekend that they are related. On the platform X, the company stated it had fixed an issue that allowed an external party to request password reset emails for “some people.”

So, what’s happening?

Regarding the data found on the dark web last week, Shahak Shalev, global head of scam and AI research at Malwarebytes, shared that “there are some indications that the Instagram data dump includes data from other, older, alleged Instagram breaches, and is a sort of compilation.” While his team investigates the data, Shalev also noted that the earliest password reset requests reported by users came days before the data was first posted on the dark web, which might mean that “the data may have been circulating in more private groups before being made public.”

However, another possibility, Shalev said, is that “another vulnerability/data leak was happening as some bad actor tried spraying for [Instagram] accounts. Instagram’s announcement seems to reference that spraying. Besides the suspicious timing, there’s no clear connection between the two at this time.”

But, importantly, scammers will not care whether these incidents are related or not. They will try to take advantage of the situation by sending out fake emails.

“We felt it was important to alert people about the data availability so that everyone could reset their passwords, directly from the app, and be on alert for other phishing communications,” Shalev said.

If and when we find out more, we’ll keep you posted, so stay tuned.

How to stay safe

If you have enabled 2FA on your Instagram account, it is likely safe to ignore the emails, as Meta suggests.

Should you want to err on the side of caution and change your password anyway, do so in the app rather than by clicking any links in the email. If the email turns out to be fake, following its links could hand your new password straight to scammers.

Another thing to keep in mind is that these are Meta accounts, which means some users may have linked them to, or reused the same credentials on, Facebook or WhatsApp. So, as a precaution, check recent logins and active sessions on Instagram, WhatsApp, and Facebook, and log out from any devices or locations you do not recognize.

If you want to find out whether your data was included in an Instagram data breach, or any other for that matter, try our free Digital Footprint scan.

Australian Social Media Ban Takes Effect as Kids Scramble for Alternatives

9 December 2025 at 16:10


Australia’s world-first social media ban for children under age 16 takes effect on December 10, leaving kids scrambling for alternatives and the Australian government with the daunting task of enforcing the ambitious ban. What is the Australian social media ban, who and what services does it cover, and what steps can affected children take? We’ll cover all that, plus the compliance and enforcement challenges facing both social media companies and the Australian government – and the move toward similar bans in other parts of the world.

Australian Social Media Ban Supported by Most – But Not All

In September 2024, Prime Minister Anthony Albanese announced that his government would introduce legislation to set a minimum age requirement for social media because of concerns about the effect of social media on the mental health of children. The amendment to the Online Safety Act 2021 passed in November 2024 with the overwhelming support of the Australian Parliament. Public support is also broad – even as most parents say they don’t plan to fully enforce the ban with their children. The law already faces a legal challenge from The Digital Freedom Project, and the Australian Financial Review reported that Reddit may file a challenge too. The ban – which proponents call a social media “delay” – covers the following 10 services:
  • Facebook
  • Instagram
  • Kick
  • Reddit
  • Snapchat
  • Threads
  • TikTok
  • Twitch
  • X
  • YouTube
Those services must take steps by Wednesday to remove accounts held by users under 16 in Australia and prevent children from registering new accounts. Many services began to comply before the Dec. 10 implementation date, although X had not yet communicated its policy to the government as of Dec. 9, according to The Guardian. Companies that fail to comply with the ban face fines of up to AUD $49.5 million, while there are no penalties for parents or children who fail to comply.

Opposition From a Wide Range of Groups – And Efforts Elsewhere

Opposition to the law has come from a range of groups, including those concerned about the privacy issues raised by age verification processes such as facial recognition and age assessment technology or the use of government IDs. Others have said the ban could push children toward darker, less regulated platforms, and one group noted that children often reach out for mental health help on social media.

Amnesty International also opposed the ban. The international human rights group called it “an ineffective quick fix that’s out of step with the realities of a generation that lives both on and offline,” arguing that strong regulation and safeguards would be a better solution. “The most effective way to protect children and young people online is by protecting all social media users through better regulation, stronger data protection laws and better platform design,” Amnesty said. “Robust safeguards are needed to ensure social media platforms stop exposing users to harms through their relentless pursuit of user engagement and exploitation of people’s personal data.”

“Many young people will no doubt find ways to avoid the restrictions,” the group added. “A ban simply means they will continue to be exposed to the same harms but in secret, leaving them at even greater risk.”

Even the prestigious medical journal The Lancet suggested that a ban may be too blunt an instrument and that 16-year-olds will still face the same harmful content and risks. Jasmine Fardouly of the University of Sydney School of Psychology noted in a Lancet commentary that “further government regulations and support for parents and children are needed to help make social media safe for all users while preserving its benefits.” Still, despite the chorus of concerns, the idea of a social media ban for children is catching on in other places, including the EU and Malaysia.

Australian Children Seek Alternatives as Compliance Challenges Loom

The Australian social media ban leaves open a range of options for under-16 users, among them Yope, Lemon8, Pinterest, Discord, WhatsApp, Messenger, iMessage, Signal, and communities that have been sources of controversy such as Telegram and 4chan. Users have exchanged phone numbers with friends and other users, and many have downloaded their personal data from apps where they’ll be losing access, including photos, videos, posts, comments, interactions, and platform profile data. Many have investigated VPNs as a possible way around the ban, but a VPN is unlikely to work with an existing account that has already been identified as an underage Australian account.

In the meantime, social media services face the daunting task of trying to confirm the age of account holders, a process that even Albanese has acknowledged “won’t be 100 per cent perfect.” There have already been reports of visual age checks failing, and a government-funded report released in August admitted the process will be imperfect. The government has published substantial guidance for helping social media companies comply with the law, but it will no doubt take time to determine what “reasonable steps” to comply look like. Until then, companies will have to navigate compliance guidance like the following passage:

“Providers may choose to offer the option to end-users to provide government-issued identification or use the services of an accredited provider. However, if a provider wants to employ an age assurance method that requires the collection of government-issued identification, then the provider must always offer a reasonable alternative that doesn’t require the collection of government-issued identification. A provider can never require an end-user to give government-issued identification as the sole method of age assurance and must always give end-users an alternative choice if one of the age assurance options is to use government-issued identification. A provider also cannot implement an age assurance system which requires end-users to use the services of an accredited provider without providing the end-user with other choices.”

How to set up two factor authentication (2FA) on your Instagram account

27 October 2025 at 10:53

Two-factor authentication (2FA) isn’t foolproof, but it is one of the best ways to protect your accounts from hackers.

It adds a small extra step when logging in, but that extra effort pays off. Instagram’s 2FA requires an additional code whenever you try to log in from an unrecognized device or browser—stopping attackers even if they have your password.

Instagram offers multiple 2FA options: text message (SMS), an authentication app (recommended), or a security key.

Instagram 2FA options

Here’s how to enable 2FA on Instagram for Android, iPhone/iPad, and the web.

How to set up 2FA for Instagram on Android

  1. Open the Instagram app and log in.
  2. Tap your profile picture at the bottom right.
  3. Tap the menu icon (three horizontal lines) in the top right.
  4. Select Accounts Center at the bottom.
  5. Tap Password and security > Two-factor authentication.
  6. Choose your Instagram account.
  7. Select a verification method: Text message (SMS), Authentication app (recommended), or WhatsApp.
    • SMS: Enter your phone number if you haven’t already. Instagram will send you a six-digit code. Enter it to confirm.
    • Authentication app: Choose an app like Google Authenticator or Duo Mobile. Scan the QR code or copy the setup key, then enter the generated code on Instagram.
    • WhatsApp: Enable text message security first, then link your WhatsApp number.
  8. Follow the on-screen instructions to finish setup.

How to set up 2FA for Instagram on iPhone or iPad

  1. Open the Instagram app and log in.
  2. Tap your profile picture at the bottom right.
  3. Tap the menu icon > Settings > Security > Two-factor authentication.
  4. Tap Get Started.
  5. Choose Authentication app (recommended), Text message, or WhatsApp.
    • Authentication app: Copy the setup key or scan the QR code with your chosen app. Enter the generated code and tap Next.
    • Text message: Turn it on, then enter the six-digit SMS code Instagram sends you.
    • WhatsApp: Enable text message first, then add WhatsApp.
  6. Follow on-screen instructions to complete the setup.

How to set up 2FA for Instagram in a web browser

  1. Go to instagram.com and log in.
  2. Open Accounts Center > Password and security.
  3. Click Two-factor authentication, then choose your account.
    • Note: If your accounts are linked, you can enable 2FA for both Instagram and your overall Meta account here.
  4. Choose your preferred 2FA method and follow the online prompts.

Enable it today

Even the strongest password isn’t enough on its own. 2FA means a thief must also have access to an additional factor, whether that’s a code generated on a physical device or a security key, to log in to your account. That makes it far harder for criminals to break in.

Turn on 2FA for all your important accounts, especially social media and messaging apps. It only takes a few minutes, but it could save you hours – or even days – of recovery later. It’s currently the best password advice we have.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Affiliates Flock to ‘Soulless’ Scam Gambling Machine

28 August 2025 at 13:21

Last month, KrebsOnSecurity tracked the sudden emergence of hundreds of polished online gaming and wagering websites that lure people with free credits and eventually abscond with any cryptocurrency funds deposited by players. We’ve since learned that these scam gambling sites have proliferated thanks to a new Russian affiliate program called “Gambler Panel” that bills itself as a “soulless project that is made for profit.”

A machine-translated version of Gambler Panel’s affiliate website.

The scam begins with deceptive ads posted on social media that claim the wagering sites are working in partnership with popular athletes or social media personalities. The ads invariably state that by using a supplied “promo code,” interested players can claim a $2,500 credit on the advertised gaming website.

The gaming sites ask visitors to create a free account to claim their $2,500 credit, which they can use to play any number of extremely polished video games that ask users to bet on each action. However, when users try to cash out any “winnings” the gaming site will reject the request and prompt the user to make a “verification deposit” of cryptocurrency — typically around $100 — before any money can be distributed.

Those who deposit cryptocurrency funds are soon pressed into more wagering and making additional deposits. And — shocker alert — all players eventually lose everything they’ve invested in the platform.

The number of scam gambling or “scambling” sites has skyrocketed in the past month, and now we know why: The sites all pull their gaming content and detailed strategies for fleecing players straight from the playbook created by Gambler Panel, a Russian-language affiliate program that promises affiliates up to 70 percent of the profits.

Gambler Panel’s website gambler-panel[.]com links to a helpful wiki that explains the scam from cradle to grave, offering affiliates advice on how best to entice visitors, keep them gambling, and extract maximum profits from each victim.

“We have a completely self-written from scratch FAKE CASINO engine that has no competitors,” Gambler Panel’s wiki enthuses. “Carefully thought-out casino design in every pixel, a lot of audits, surveys of real people and test traffic floods were conducted, which allowed us to create something that has no doubts about the legitimacy and trustworthiness even for an inveterate gambling addict with many years of experience.”

Gambler Panel explains that the one and only goal of affiliates is to drive traffic to these scambling sites by any and all means possible.

A machine-translated portion of Gambler Panel’s singular instruction for affiliates: Drive traffic to these scambling sites by any means available.

“Unlike white gambling affiliates, we accept absolutely any type of traffic, regardless of origin, the only limitation is the CIS countries,” the wiki continued, referring to a common prohibition against scamming people in Russia and former Soviet republics in the Commonwealth of Independent States.

The program’s website claims it has more than 20,000 affiliates, who earn a minimum of $10 for each verification deposit. Interested new affiliates must first get approval from the group’s Telegram channel, which currently has around 2,500 active users.

The Gambler Panel channel is replete with images of affiliate panels showing the daily revenue of top affiliates, scantily-clad young women promoting the Gambler logo, and fast cars that top affiliates claimed they bought with their earnings.

A machine-translated version of the wiki for the affiliate program Gambler Panel.

The apparent popularity of this scambling niche is a consequence of the program’s ease of use and detailed instructions for successfully reproducing virtually every facet of the scam. Indeed, much of the tutorial focuses on advice and ready-made templates to help even novice affiliates drive traffic via social media websites, particularly on Instagram and TikTok.

Gambler Panel also walks affiliates through a range of possible responses to questions from users who are trying to withdraw funds from the platform. This section, titled “Rules for working in Live chat,” urges scammers to respond quickly to user requests (1-7 minutes), and includes numerous strategies for keeping the conversation professional and the user on the platform as long as possible.

A machine-translated version of the Gambler Panel’s instructions on managing chat support conversations with users.

The connection between Gambler Panel and the explosion in the number of scambling websites was made by a 17-year-old developer who operates multiple Discord servers that have been flooded lately with misleading ads for these sites.

The researcher, who asked to be identified only by the nickname “Thereallo,” said Gambler Panel has built a scalable business product for other criminals.

“The wiki is kinda like a ‘how to scam 101’ for criminals written with the clarity you would expect from a legitimate company,” Thereallo said. “It’s clean, has step by step guides, and treats their scam platform like a real product. You could swap out the content, and it could be any documentation for startups.”

“They’ve minimized their own risk — spreading the links on Discord / Facebook / YT Shorts, etc. — and outsourced it to a hungry affiliate network, just like a franchise,” Thereallo wrote in response to questions.

“A centralized platform that can serve over 1,200 domains with a shared user base, IP tracking, and a custom API is not at all a trivial thing to build,” Thereallo said. “It’s a scalable system designed to be a resilient foundation for thousands of disposable scam sites.”

The security firm Silent Push has compiled a list of the latest domains associated with the Gambler Panel, available here (.csv).
