

Australian Social Media Ban Takes Effect as Kids Scramble for Alternatives

9 December 2025 at 16:10

Australia’s world-first social media ban for children under age 16 takes effect on December 10, leaving kids scrambling for alternatives and the Australian government with the daunting task of enforcing the ambitious ban. What is the Australian social media ban, who and what services does it cover, and what steps can affected children take? We’ll cover all that, plus the compliance and enforcement challenges facing both social media companies and the Australian government – and the move toward similar bans in other parts of the world.

Australian Social Media Ban Supported by Most – But Not All

In September 2024, Prime Minister Anthony Albanese announced that his government would introduce legislation to set a minimum age for social media, citing concerns about its effect on children’s mental health. The amendment to the Online Safety Act 2021 passed in November 2024 with overwhelming support in the Australian Parliament, and the measure also enjoys broad public support – even as most parents say they don’t plan to fully enforce the ban with their children. The law already faces a legal challenge from The Digital Freedom Project, and the Australian Financial Review reported that Reddit may file a challenge too. Services affected by the ban – which proponents call a social media “delay” – include the following 10 services:
  • Facebook
  • Instagram
  • Kick
  • Reddit
  • Snapchat
  • Threads
  • TikTok
  • Twitch
  • X
  • YouTube
Those services must take steps by Wednesday to remove accounts held by Australian users under 16 and prevent children from registering new accounts. Many services began to comply before the December 10 implementation date, although X had not yet communicated its policy to the government as of December 9, according to The Guardian. Companies that fail to comply with the ban face fines of up to AUD $49.5 million; there are no penalties for parents or children.

Opposition From a Wide Range of Groups – And Efforts Elsewhere

Opposition to the law has come from a range of groups, including those concerned about the privacy issues raised by age verification processes such as facial recognition and age assessment technology or the use of government IDs. Others have said the ban could push children toward darker, less regulated platforms, and one group noted that children often reach out for mental health help on social media.

Amnesty International also opposed the ban. The international human rights group called it “an ineffective quick fix that’s out of step with the realities of a generation that lives both on and offline,” and said strong regulation and safeguards would be a better solution. “The most effective way to protect children and young people online is by protecting all social media users through better regulation, stronger data protection laws and better platform design,” Amnesty said. “Robust safeguards are needed to ensure social media platforms stop exposing users to harms through their relentless pursuit of user engagement and exploitation of people’s personal data.”

“Many young people will no doubt find ways to avoid the restrictions,” the group added. “A ban simply means they will continue to be exposed to the same harms but in secret, leaving them at even greater risk.”

Even the prestigious medical journal The Lancet suggested that a ban may be too blunt an instrument and that 16-year-olds will still face the same harmful content and risks. Jasmine Fardouly of the University of Sydney School of Psychology noted in a Lancet commentary that “Further government regulations and support for parents and children are needed to help make social media safe for all users while preserving its benefits.”

Still, despite the chorus of concerns, the idea of a social media ban for children is catching on in other places, including the EU and Malaysia.

Australian Children Seek Alternatives as Compliance Challenges Loom

The Australian social media ban leaves open a range of options for under-16 users, among them Yope, Lemon8, Pinterest, Discord, WhatsApp, Messenger, iMessage, Signal, and communities that have been sources of controversy such as Telegram and 4chan. Users have exchanged phone numbers with friends and other users, and many have downloaded their personal data from apps where they’ll be losing access, including photos, videos, posts, comments, interactions and platform profile data. Many have investigated VPNs as a possible way around the ban, but a VPN is unlikely to work with an existing account that has already been identified as an underage Australian account.

In the meantime, social media services face the daunting task of trying to confirm the age of account holders, a process that even Albanese has acknowledged “won’t be 100 per cent perfect.” There have already been reports of visual age checks failing, and a government-funded report released in August admitted the process will be imperfect. The government has published substantial guidance for helping social media companies comply with the law, but it will no doubt take time to determine what “reasonable steps” to comply look like. Companies will have to navigate compliance guidance like the following passage:

“Providers may choose to offer the option to end-users to provide government-issued identification or use the services of an accredited provider. However, if a provider wants to employ an age assurance method that requires the collection of government-issued identification, then the provider must always offer a reasonable alternative that doesn’t require the collection of government-issued identification. A provider can never require an end-user to give government-issued identification as the sole method of age assurance and must always give end-users an alternative choice if one of the age assurance options is to use government-issued identification. A provider also cannot implement an age assurance system which requires end-users to use the services of an accredited provider without providing the end-user with other choices.”

‘It has to be genuine’: older influencers drive growth on social media

8 December 2025 at 07:48

As midlife audiences turn to digital media, the 55 to 64 age bracket is an increasingly important demographic

In 2022, Caroline Idiens was on holiday halfway up an Italian mountain when her brother called to tell her to check her Instagram account. “I said, ‘I haven’t got any wifi.’ And he said: ‘Every time you refresh, it’s adding 500 followers.’ So I had to try to get to the top of the hill with the phone to check for myself.”

A personal trainer from Berkshire who began posting her fitness classes online at the start of lockdown in 2020, Idiens, 53, had already built a respectable following.

© Photograph: Elena Sigtryggsson


YouTube Releases Its First-Ever Recap of Videos You've Watched

3 December 2025 at 14:19
YouTube has launched its first-ever "Recap" for videos watched on the main platform, giving users personalized cards that showcase their top channels, interests, and a personality type based on their watch habits. The feature rolls out across North America today and globally this week. TechCrunch reports: Users can find their Recap directly on the YouTube homepage or under the "You" tab. Recaps are accessible on mobile devices and desktop. YouTube says the new feature was requested by users and that it conducted over 50 different concept tests before landing on the final product. Alongside the launch of Recap, YouTube also released trend charts showcasing the top creators, podcasts, and songs of the year.

Read more of this story at Slashdot.

SmartTube YouTube App For Android TV Breached To Push Malicious Update

2 December 2025 at 15:20
An anonymous reader quotes a report from BleepingComputer: The popular open-source SmartTube YouTube client for Android TV was compromised after an attacker gained access to the developer's signing keys, leading to a malicious update being pushed to users. The compromise became known when multiple users reported that Play Protect, Android's built-in antivirus module, blocked SmartTube on their devices and warned them of a risk. The developer of SmartTube, Yuriy Yuliskov, admitted that his digital keys were compromised late last week, leading to the injection of malware into the app. Yuliskov revoked the old signature and said he would soon publish a new version with a separate app ID, urging users to move to that one instead. [...] A user who reverse-engineered the compromised SmartTube version number 30.51 found that it includes a hidden native library named libalphasdk.so [VirusTotal]. This library does not exist in the public source code, so it is being injected into release builds. [...] The library runs silently in the background without user interaction, fingerprints the host device, registers it with a remote backend, and periodically sends metrics and retrieves configuration via an encrypted communications channel. All this happens without any visible indication to the user. While there's no evidence of malicious activity such as account theft or participation in DDoS botnets, the risk of enabling such activities at any time is high.
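The reverse-engineering finding above is straightforward to sanity-check yourself, because an APK is just a ZIP archive with native code stored under `lib/<abi>/`. The sketch below is illustrative and not taken from the original report; only the library name `libalphasdk.so` comes from the article. It lists the bundled `.so` files so that a library absent from the public source tree stands out:

```python
import zipfile

def native_libs(apk_path):
    """Return the native shared libraries (.so files) bundled in an APK.

    An APK is a ZIP archive; native code lives under lib/<abi>/, so a
    library that does not appear in the public source tree is easy to spot.
    """
    with zipfile.ZipFile(apk_path) as apk:
        return sorted(
            name for name in apk.namelist()
            if name.startswith("lib/") and name.endswith(".so")
        )

def contains_library(apk_path, soname="libalphasdk.so"):
    """Flag an APK that bundles the library named in the SmartTube report."""
    return any(lib.endswith("/" + soname) for lib in native_libs(apk_path))
```

Comparing such a listing against a build produced from the public source code is essentially what the user who found `libalphasdk.so` did, just at the level of binary analysis rather than filenames.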

He got sued for sharing public YouTube videos; nightmare ended in settlement

19 November 2025 at 15:56

Nobody expects to get sued for re-posting a YouTube video on social media by using the “share” button, but librarian Ian Linkletter spent the past five years embroiled in a copyright fight after doing just that.

Now that a settlement has been reached, Linkletter told Ars why he thinks his 2020 tweets sharing public YouTube videos put a target on his back.

Linkletter’s legal nightmare started in 2020 after an education technology company, Proctorio, began monitoring student backlash on Reddit over its AI tool used to remotely scan rooms, identify students, and prevent cheating on exams. On Reddit, students echoed serious concerns raised by researchers, warning of privacy issues, racist and sexist biases, and barriers to students with disabilities.


© Ashley Linkletter

Meta wins monopoly trial, convinces judge that social networking is dead

18 November 2025 at 16:47

After years of pushback from the Federal Trade Commission over Meta’s acquisitions of Instagram and WhatsApp, Meta has defeated the FTC’s monopoly claims.

In a Tuesday ruling, US District Judge James Boasberg said the FTC failed to show that Meta has a monopoly in a market dubbed “personal social networking.” In that narrowly defined market, the FTC unsuccessfully argued, Meta supposedly faces only two rivals, Snapchat and MeWe, which struggle to compete due to its alleged monopoly.

But the days of grouping apps into “separate markets of social networking and social media” are over, Boasberg wrote. He cited the Greek philosopher Heraclitus, who “posited that no man can ever step into the same river twice,” while telling the FTC they missed their chance to block Meta’s purchase.


© Bloomberg / Contributor | Bloomberg

Google settles YouTube lawsuit over kids’ privacy invasion and data collection

21 August 2025 at 07:42

Google has agreed to a $30 million settlement in the US over allegations that it illegally collected data from underage YouTube users for targeted advertising.

The lawsuit claims Google tracked the personal information of children under 13 without proper parental consent, which is a violation of the Children’s Online Privacy Protection Act (COPPA). The tech giant denies any wrongdoing but opted for settlement, according to Reuters.

Does this sound like a re-run episode? There’s a reason you might think that. In 2019, Google settled another case with the US Federal Trade Commission (FTC), paying $170 million for allegedly collecting data from minors on YouTube without parental permission.

Plaintiffs in the recent case argued that despite that prior agreement, Google continued collecting information from children, thereby violating federal laws for years afterward.

Recently, YouTube created some turmoil by testing controversial artificial intelligence (AI) in the US to spot under-18s based on what they watch. To bypass the traditional method of having users fill out their birth dates, the platform is now examining the types of videos watched, search behavior, and account history to assess a user’s age. Whether that’s the way to prevent future lawsuits is questionable.

The class-action suit covers American children under 13 who watched YouTube videos between July 2013 and April 2020. According to the legal team representing the plaintiffs, as many as 35 million to 45 million people may be eligible for compensation. 

With 2024 revenue of $384 billion, a $30 million settlement will probably not have a large impact on Google. It may not even outweigh the profits made directly from the violations it was accused of.

How to claim

Based on typical class-action participation rates (1%–10%), the actual number of claimants will likely be in the hundreds of thousands. Those who successfully submit a claim could receive between $10 and $60 each, depending on the final number of validated claims, and before deducting legal fees and costs.
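The back-of-the-envelope arithmetic behind those estimates can be sketched as follows. The fund size, class size, participation rates, and payout range come from the article; the calculation itself is illustrative and ignores legal fees:

```python
settlement = 30_000_000                  # total settlement fund, USD
eligible = (35_000_000, 45_000_000)      # estimated eligible class members
participation = (0.01, 0.10)             # typical class-action claim rates

# Plausible claimant counts under the typical participation rates
fewest_claimants = int(eligible[0] * participation[0])   # 350,000
most_claimants = int(eligible[1] * participation[1])     # 4,500,000

# Claimant counts implied by the quoted $10-$60 per-person range
implied_at_60 = settlement // 60   # 500,000 claimants at $60 each
implied_at_10 = settlement // 10   # 3,000,000 claimants at $10 each
```

The per-claimant payout shrinks as more valid claims arrive, which is why the final amount depends on how many people actually file.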

If you believe your child, or you as a minor, might qualify for compensation based on these criteria, here are a few practical steps:

  • Review the eligibility period: Only children under 13 who viewed YouTube videos from July 2013 to April 2020 qualify.
  • Prepare documentation: Gather any records that could prove usage, such as email communications, registration confirmations, or even device logs showing relevant YouTube activity.
  • Monitor official channels: Typically, reputable law firms or consumer protection groups will post claimant instructions soon after a settlement. Avoid clicking on unsolicited emails or links promising easy payouts since these might be scams.
  • Be quick, but careful: Class-action settlements usually have short windows for submitting claims. Act promptly once the process opens but double-check that you’re on an official platform (such as the settlement administration site listed in legal notices).

How to protect your children’s privacy

Digital awareness and proactive security measures should always be top of mind when children use online platforms.

  • Regardless of your involvement in the settlement, it’s wise to check and use privacy settings on children’s devices and turn off personalized ad tracking wherever possible.
  • Some platforms have separate versions for different age groups. Use them where applicable.
  • Show an interest in what your kids are watching. Explaining works better than forbidding without providing reasons.


Big Tech’s Mixed Response to U.S. Treasury Sanctions

3 July 2025 at 12:06

In May 2025, the U.S. government sanctioned a Chinese national for operating a cloud provider linked to the majority of virtual currency investment scam websites reported to the FBI. But a new report finds the accused continues to operate a slew of established accounts at American tech companies — including Facebook, Github, PayPal and Twitter/X.

On May 29, the U.S. Department of the Treasury announced economic sanctions against Funnull Technology Inc., a Philippines-based company alleged to provide infrastructure for hundreds of thousands of websites involved in virtual currency investment scams known as “pig butchering.” In January 2025, KrebsOnSecurity detailed how Funnull was designed as a content delivery network that catered to foreign cybercriminals seeking to route their traffic through U.S.-based cloud providers.

The Treasury also sanctioned Funnull’s alleged operator, a 40-year-old Chinese national named Liu “Steve” Lizhi. The government says Funnull directly facilitated financial schemes resulting in more than $200 million in financial losses by Americans, and that the company’s operations were linked to the majority of pig butchering scams reported to the FBI.

It is generally illegal for U.S. companies or individuals to transact with people sanctioned by the Treasury. However, as Mr. Lizhi’s case makes clear, just because someone is sanctioned doesn’t necessarily mean big tech companies are going to suspend their online accounts.

The government says Lizhi was born November 13, 1984, and used the nicknames “XXL4” and “Nice Lizhi.” Nevertheless, Steve Liu’s 17-year-old account on LinkedIn (in the name “Liulizhi”) had hundreds of followers (Lizhi’s LinkedIn profile helpfully confirms his birthday) until quite recently: The account was deleted this morning, just hours after KrebsOnSecurity sought comment from LinkedIn.

Mr. Lizhi’s LinkedIn account was suspended sometime in the last 24 hours, after KrebsOnSecurity sought comment from LinkedIn.

In an emailed response, a LinkedIn spokesperson said the company’s “Prohibited countries policy” states that LinkedIn “does not sell, license, support or otherwise make available its Premium accounts or other paid products and services to individuals and companies sanctioned by the U.S. government.” LinkedIn declined to say whether the profile in question was a premium or free account.

Mr. Lizhi also maintains a working PayPal account under the name Liu Lizhi and username “@nicelizhi,” another nickname listed in the Treasury sanctions. A 15-year-old Twitter/X account named “Lizhi” that links to Mr. Lizhi’s personal domain remains active, although it has few followers and hasn’t posted in years.

These accounts and many others were flagged by the security firm Silent Push, which has been tracking Funnull’s operations for the past year and calling out U.S. cloud providers like Amazon and Microsoft for failing to more quickly sever ties with the company.

Liu Lizhi’s PayPal account.

In a report released today, Silent Push found Lizhi still operates numerous Facebook accounts and groups, including a private Facebook account under the name Liu Lizhi. Another Facebook account clearly connected to Lizhi is a tourism page for Ganzhou, China called “EnjoyGanzhou” that was named in the Treasury Department sanctions.

“This guy is the technical administrator for the infrastructure that is hosting a majority of scams targeting people in the United States, and hundreds of millions have been lost based on the websites he’s been hosting,” said Zach Edwards, senior threat researcher at Silent Push. “It’s crazy that the vast majority of big tech companies haven’t done anything to cut ties with this guy.”

The FBI says it received nearly 150,000 complaints last year involving digital assets and $9.3 billion in losses — a 66 percent increase from the previous year. Investment scams were the top crypto-related crimes reported, with $5.8 billion in losses.

In a statement, a Meta spokesperson said the company continuously takes steps to meet its legal obligations, but that sanctions laws are complex and varied. They explained that sanctions are often targeted in nature and don’t always prohibit people from having a presence on its platform. Nevertheless, Meta confirmed it had removed the account, unpublished Pages, and removed Groups and events associated with the user for violating its policies.

Attempts to reach Mr. Lizhi via his primary email addresses at Hotmail and Gmail bounced as undeliverable. Likewise, his 14-year-old YouTube channel appears to have been taken down recently.

However, anyone interested in viewing or using Mr. Lizhi’s 146 computer code repositories will have no problem finding GitHub accounts for him, including one registered under the NiceLizhi and XXL4 nicknames mentioned in the Treasury sanctions.

One of multiple GitHub profiles used by Liu “Steve” Lizhi, who uses the nickname XXL4 (a moniker listed in the Treasury sanctions for Mr. Lizhi).

Mr. Lizhi also operates a GitHub page for an open source e-commerce platform called NexaMerchant, which advertises itself as a payment gateway working with numerous American financial institutions. Interestingly, this profile’s “followers” page shows several other accounts that appear to be Mr. Lizhi’s. All of the account’s followers are tagged as “suspended,” even though that suspended message does not display when one visits those individual profiles.

In response to questions, GitHub said it has a process in place to identify when users and customers are Specially Designated Nationals or other denied or blocked parties, but that it locks those accounts instead of removing them. According to its policy, GitHub takes care that users and customers aren’t impacted beyond what is required by law.

All of the follower accounts for the XXL4 GitHub account appear to be Mr. Lizhi’s, and have been suspended by GitHub, but their code is still accessible.

“This includes keeping public repositories, including those for open source projects, available and accessible to support personal communications involving developers in sanctioned regions,” the policy states. “This also means GitHub will advocate for developers in sanctioned regions to enjoy greater access to the platform and full access to the global open source community.”

Edwards said it’s great that GitHub has a process for handling sanctioned accounts, but that the process doesn’t seem to communicate risk in a transparent way, noting that the only indicator on the locked accounts is the message, “This repository has been archived by the owner. It is now read-only.”

“It’s an odd message that doesn’t communicate, ‘This is a sanctioned entity, don’t fork this code or use it in a production environment’,” Edwards said.

Mark Rasch is a former federal cybercrime prosecutor who now serves as counsel for the New York City based security consulting firm Unit 221B. Rasch said when Treasury’s Office of Foreign Assets Control (OFAC) sanctions a person or entity, it then becomes illegal for businesses or organizations to transact with the sanctioned party.

Rasch said financial institutions have very mature systems for severing accounts tied to people who become subject to OFAC sanctions, but that tech companies may be far less proactive — particularly with free accounts.

“Banks have established ways of checking [U.S. government sanctions lists] for sanctioned entities, but tech companies don’t necessarily do a good job with that, especially for services that you can just click and sign up for,” Rasch said. “It’s potentially a risk and liability for the tech companies involved, but only to the extent OFAC is willing to enforce it.”

Liu Lizhi operates numerous Facebook accounts and groups, including this one for an entity specified in the OFAC sanctions: The “Enjoy Ganzhou” tourism page for Ganzhou, China. Image: Silent Push.

In July 2024, Funnull purchased the domain polyfill[.]io, the longtime home of a legitimate open source project that allowed websites to ensure that devices using legacy browsers could still render content in newer formats. After the Polyfill domain changed hands, at least 384,000 websites were caught in a supply-chain attack that redirected visitors to malicious sites. According to the Treasury, Funnull used the code to redirect people to scam websites and online gambling sites, some of which were linked to Chinese criminal money laundering operations.

The U.S. government says Funnull provides domain names for websites on its purchased IP addresses, using domain generation algorithms (DGAs) — programs that generate large numbers of similar but unique names for websites — and that it sells web design templates to cybercriminals.

“These services not only make it easier for cybercriminals to impersonate trusted brands when creating scam websites, but also allow them to quickly change to different domain names and IP addresses when legitimate providers attempt to take the websites down,” reads a Treasury statement.
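To make the DGA concept concrete, here is a toy sketch of how a seed plus a counter can churn out many similar-looking but unique domain names. This is purely illustrative, not Funnull’s actual algorithm, and the seed, date, and TLD are made-up examples:

```python
import hashlib

def toy_dga(seed: str, date: str, count: int, tld: str = ".top") -> list:
    """Derive `count` unique domain names from a seed and a date.

    Deterministic: anyone who recovers the seed can regenerate the same
    list, which is how defenders preemptively block real DGA domains.
    """
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}|{date}|{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)
    return domains
```

Because the output shifts with every new seed or date, taking down any one domain barely slows the operator, which matches the behavior the Treasury describes.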

Meanwhile, Funnull appears to be morphing nearly all aspects of its business in the wake of the sanctions, Edwards said.

“Whereas before they might have used 60 DGA domains to hide and bounce their traffic, we’re seeing far more now,” he said. “They’re trying to make their infrastructure harder to track and more complicated, so for now they’re not going away but more just changing what they’re doing. And a lot more organizations should be holding their feet to the fire.”

Update, 2:48 PM ET: Added response from Meta, which confirmed it has closed the accounts and groups connected to Mr. Lizhi.

Update, July 7, 6:56 p.m. ET: In a written statement, PayPal said it continually works to combat and prevent the illicit use of its services.

“We devote significant resources globally to financial crime compliance, and we proactively refer cases to and assist law enforcement officials around the world in their efforts to identify, investigate and stop illegal activity,” the statement reads.
