
Australia’s Social Media Ban for Kids: Protection, Overreach or the Start of a Global Shift?

10 December 2025 at 04:23


On a cozy December morning, as children across Australia set their school bags aside for the holiday season and picked up their tablets and phones to take that selfie announcing to the world they were all set for the fun to begin, something felt amiss. They couldn't access their Snapchat and Instagram accounts. No, it wasn't another outage caused by a cyberattack; they could see their parents lounging on the couch, laughing at dog dance reels. So why were they locked out? The answer: the ban on social media for children under 16 had officially taken effect. It wasn't just one, or 10, or 100, but more than one million young users who woke up locked out of their social media. No TikTok scroll. No Snapchat streak. No YouTube comments.

Australia had quietly entered a new era: the world’s first nationwide ban on social media for children under 16, effective December 10. The move has sparked global debate, parental relief, youth frustration, and a broader question: Is this the start of a global shift, or a risky social experiment?

Prime Minister Anthony Albanese was clear about why his government took this unparalleled step. “Social media is doing harm to our kids, and I’m calling time on it,” he said during a press conference. “I’ve spoken to thousands of parents… they’re worried sick about the safety of our kids online, and I want Australian families to know that the Government has your back.”

Under the Albanese government’s social media policy, platforms including Instagram, Facebook, X, Snapchat, TikTok, Reddit, Twitch, Kick, Threads and YouTube must block users under 16, or face fines of up to AU$32 million. Parents and children won’t be penalized, but tech companies will.

[Image: Australia’s social media ban. Source: eSafety Commissioner]

Australia's Ban on Social Media: A Big Question

Albanese pointed to rising concerns about the effects of social media on children, from body-image distortion to exposure to inappropriate content and addictive algorithms that tug at young attention spans. Research supports these concerns. A Pew Research Center study found:
  • 48% of teens say social media has a mostly negative effect on people their age, up sharply from 32% in 2022.
  • 45% feel they spend too much time on social media.
  • Teen girls experience more negative impacts than boys, including mental health struggles (25% vs 14%) and loss of confidence (20% vs 10%).
  • Yet paradoxically, 74% of teens feel more connected to friends because of social media, and 63% use it for creativity.
These contradictions make the issue far from black and white. Psychologists remind us that adolescence, beginning around age 10 and stretching into the mid-20s, is a time of rapid biological and social change, and that maturity levels vary. This means that a one-size-fits-all ban on social media may overshoot the mark.

Ban on Social Media for Users Under 16: How People Reacted

Australia’s announcement, first revealed in November 2024, has prompted countries from Malaysia to Denmark to consider similar legislation. But not everyone is convinced this is the right way forward.

Supporters Applaud “A Chance at a Real Childhood”

Pediatric occupational therapist Cris Rowan, who has spent 22 years working with children, celebrated the move: “This may be the first time children have the opportunity to experience a real summer,” she said. “Canada should follow Australia’s bold initiative. Parents and teachers can start their own movement by banning social media from homes and schools.” Parents’ groups have also welcomed the decision, seeing it as a necessary intervention in a world where screens dominate childhood.

Others Say the Ban Is Imperfect, but Necessary

Australian author Geoff Hutchison puts it bluntly: “We shouldn’t look for absolutes. It will be far from perfect. But we can learn what works… We cannot expect the repugnant tech bros to care.” His view reflects a broader belief that tech companies have too much power, and too little accountability.

Experts Warn Against False Security 

However, some experts caution that the Australia ban on social media may create the illusion of safety while failing to address deeper issues. Professor Tama Leaver, Internet Studies expert at Curtin University, told The Cyber Express that while the ban on social media addresses some risks, such as algorithmic amplification of inappropriate content and endless scrolling, many online dangers remain.

“The social media ban only really addresses one set of risks for young people, which is algorithmic amplification of inappropriate content and the doomscrolling or infinite scroll. Many risks remain. The ban does nothing to address cyberbullying since messaging platforms are exempt from the ban, so cyberbullying will simply shift from one platform to another.”

Leaver also noted that restricting access to popular platforms will not drive children offline. Because of the ban, young users will explore whatever digital spaces remain, which could be less regulated and potentially riskier.

“Young people are not leaving the digital world. If we take some apps and platforms away, they will explore and experiment with whatever is left. If those remaining spaces are less known and more risky, then the risks for young people could definitely increase. Ideally the ban will lead to more conversations with parents and others about what young people explore and do online, which could mitigate many of the risks.”

From a broader perspective, Leaver emphasized that the ban on social media will only be fully beneficial if accompanied by significant investment in digital literacy and digital citizenship programs across schools:

“The only way this ban could be fully beneficial is if there is a huge increase in funding and delivery of digital literacy and digital citizenship programs across the whole K-12 educational spectrum. We have to formally teach young people those literacies they might otherwise have learnt socially, otherwise the ban is just a 3 year wait that achieves nothing.”

He added that platforms themselves should take a proactive role in protecting children:

“There is a global appetite for better regulation of platforms, especially regarding children and young people. A digital duty of care which requires platforms to examine and proactively reduce or mitigate risks before they appear on platforms would be ideal, and is something Australia and other countries are exploring. Minimizing risks before they occur would be vastly preferable to the current processes which can only usually address harm once it occurs.”

Looking at the global stage, Leaver sees Australia’s ban on social media as a potential learning opportunity for other nations:

“There is clearly global appetite for better and more meaningful regulation of digital platforms. For countries considering their own bans, taking the time to really examine the rollout in Australia, to learn from our mistakes as much as our ambitions, would seem the most sensible path forward.”

Other specialists continue to warn that the ban on social media could isolate vulnerable teenagers or push them toward more dangerous, unregulated corners of the internet.

Legal Voices Raise Serious Constitutional Questions

Senior Supreme Court Advocate Dr. K. P. Kylasanatha Pillay offered a thoughtful reflection: “Exposure of children to the vagaries of social media is a global concern… But is a total ban feasible? We must ask whether this is a reasonable restriction or if it crosses the limits of state action. Not all social media content is harmful. The best remedy is to teach children awareness.” His perspective reflects growing debate about rights, safety, and state control.

LinkedIn, Reddit, and the Public Divide

Social media itself has become the battleground for reactions. On Reddit, youngsters were particularly vocal about the ban on social media. One teen wrote: “Good intentions, bad execution. This will make our generation clueless about internet safety… Social media is how teenagers express themselves. This ban silences our voices.” Another pointed out the easy loophole: “Bypassing this ban is as easy as using a free VPN. Governments don’t care about safety — they want control.” But one adult user disagreed: “Everyone against the ban seems to be an actual child. I got my first smartphone at 20. My parents were right — early exposure isn’t always good.” This generational divide is at the heart of the debate.

Brands, Marketers, and Schools Brace for Impact

Bindu Sharma, Founder of World One Consulting, highlighted the global implications: “Ten of the biggest platforms were ordered to block children… The world is watching how this plays out.” If the ban succeeds, brands may rethink how they target younger audiences. If it fails, digital regulation worldwide may need reimagining.

Where Does This Leave the World?

Australia’s decision to ban social media for children under 16 is bold, controversial, and rooted in good intentions. It could reshape how societies view childhood, technology, and digital rights. But as critics note, a ban on social media platforms can also create unintended consequences, from delinquency to digital illiteracy. What’s clear is this: Australia has started a global conversation that’s no longer avoidable. As one LinkedIn user concluded: “Safety of the child today is assurance of the safety of society tomorrow.”

European Court Imposes Strict New Data Checks on Online Marketplace Ads

3 December 2025 at 00:34


A ruling handed down Tuesday by the Court of Justice of the European Union (CJEU) has made it clear that online marketplaces are responsible for the personal data that appears in advertisements on their platforms. Platforms must obtain consent from any person whose data is shown in an advertisement, and must verify ads before they go live, especially where sensitive data is involved.

The CJEU ruling stems from a 2018 incident in Romania. A fake advertisement on the classifieds website publi24.ro claimed a woman was offering sexual services. The post included her photos and phone number, which were used without her permission. The operator of the site, Russmedia Digital, removed the ad within an hour, but by then it had already been copied to other websites. The woman said the ad harmed her privacy and reputation and took the company to court. Lower courts in Romania reached different decisions, so the case was referred to the CJEU for clarity. The court has now confirmed that online marketplaces are data controllers under the GDPR for the personal data contained in ads on their sites.

CJEU Ruling: What Online Marketplaces Must Do Now

The court said that marketplace operators must take more responsibility and cannot rely on old rules that protect hosting services from liability. From now on, platforms must:
  • Check ads before publishing them when they contain personal or sensitive data.
  • Confirm that the person posting the ad is the same person shown in the ad, or make sure the person shown has given explicit consent.
  • Refuse ads if consent or identity verification cannot be confirmed.
  • Put measures in place to help prevent sensitive ads from being copied and reposted on other websites.
These steps must be part of the platform’s regular technical and organisational processes to comply with the GDPR.
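To make the workflow concrete, here is a minimal sketch of what such a pre-publication gate might look like. This is an illustration only: the data model, field names, and decision logic are assumptions, and a real platform would need identity-verification and consent-capture machinery behind each of these flags.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    PUBLISH = "publish"
    REJECT = "reject"


@dataclass
class Ad:
    poster_id: str                 # verified identity of the posting account
    subject_id: str | None         # person depicted or named in the ad, if any
    has_personal_data: bool        # the ad contains someone's personal data
    has_sensitive_data: bool       # e.g. data about health or sex life
    subject_consent_on_file: bool  # explicit consent recorded for the subject


def pre_publication_check(ad: Ad) -> Decision:
    """Gate an ad before it goes live, mirroring the steps listed above."""
    # Ads with no personal or sensitive data can pass straight through.
    if not (ad.has_personal_data or ad.has_sensitive_data):
        return Decision.PUBLISH

    # The poster is advertising themselves: a verified identity match suffices.
    if ad.subject_id is not None and ad.subject_id == ad.poster_id:
        return Decision.PUBLISH

    # Third-party personal data: explicit consent must already be on file.
    if ad.subject_consent_on_file:
        return Decision.PUBLISH

    # Neither identity nor consent can be confirmed: the ad must be refused.
    return Decision.REJECT
```

The key design point the ruling forces is that this check runs before publication, not after a complaint, which is exactly the shift away from the old hosting-liability model.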

What This Means for Platforms Across the EU

Legal teams at Pinsent Masons warned the decision “will likely have major implications for data protection across the 27 member states.” Nienke Kingma of Pinsent Masons said the ruling is important for compliance, adding it is “setting a new standard for data protection compliance across the EU.” Thijs Kelder, also at Pinsent Masons, said: “This judgment makes clear that online marketplaces cannot avoid their obligations under the GDPR,” and noted the decision “increases the operational risks on these platforms,” meaning companies will need stronger risk management. Daphne Keller of Stanford Law School warned about wider effects on free expression and platform design, noting the ruling “has major implications for free expression and access to information, age verification and privacy.”

Practical Impact

The CJEU ruling marks a major shift in how online marketplaces must operate. Platforms that allow users to post adverts will now have to rethink their processes, from verifying identities and checking personal data before an ad goes live to updating their terms and investing in new technical controls. Smaller platforms may feel the pressure most, as the cost of building these checks could be significant. What happens next will depend on how national data protection authorities interpret the ruling and how quickly companies can adapt. The coming months will reveal how verification should work in practice, what measures count as sufficient protection against reposting, and how platforms can balance these new duties with user privacy and free expression. The ruling sets a strict new standard, and its real impact will become clearer as regulators, courts, and platforms begin to implement it.

EU Reaches Agreement on Child Sexual Abuse Detection Law After Three Years of Contentious Debate

27 November 2025 at 13:47


A three-year standoff over privacy rights versus child protection ended Wednesday when EU member states finally agreed on a negotiating mandate for the Child Sexual Abuse Regulation, a controversial law requiring online platforms to detect, report, and remove child sexual abuse material, even as critics warn the measures could enable mass surveillance of private communications.

The Council agreement, reached despite opposition from the Czech Republic, Netherlands, and Poland, clears the way for trilogue negotiations with the European Parliament to begin in 2026 on legislation that would permanently extend voluntary scanning provisions and establish a new EU Centre on Child Sexual Abuse.

The Council’s position introduces three risk categories of online services, based on objective criteria including service type. Authorities would be able to oblige providers classified as high-risk to contribute to developing technologies that mitigate risks relating to their services. The framework shifts responsibility to digital companies to proactively address risks on their platforms.

Permanent Extension of Voluntary Scanning

One significant provision permanently extends voluntary scanning, a temporary measure first introduced in 2021 that allows companies to voluntarily scan for child sexual abuse material without violating EU privacy laws. That exemption was set to expire in April 2026 under current e-Privacy Directive provisions.

At present, providers of messaging services may voluntarily check content shared on their platforms for online child sexual abuse material, then report and remove it. According to the Council position, this exemption will continue to apply indefinitely under the new law.

Danish Justice Minister Peter Hummelgaard welcomed the Council's agreement, stating that the spread of child sexual abuse material is "completely unacceptable." "Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse," Hummelgaard said.

New EU Centre on Child Sexual Abuse

The legislation provides for establishment of a new EU agency, the EU Centre on Child Sexual Abuse, to support implementation of the regulation. The Centre will act as a hub for child sexual abuse material detection, reporting, and database management, receiving reports from providers, assessing risk levels across platforms, and maintaining a database of indicators.

The EU Centre will assess and process information supplied by online providers about child sexual abuse material identified on services, creating, maintaining and operating a database for reports submitted by providers. The Centre will share information from companies with Europol and national law enforcement bodies, supporting national authorities in assessing the risk that online services could be used to spread abuse material.
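In practice, an "indicator" in systems like this is usually a digital fingerprint (a hash) of known material rather than the material itself. As a purely illustrative sketch, and assuming nothing about the Centre's actual interfaces, matching an uploaded file against an exact-match indicator set could look like this:

```python
import hashlib

# Hypothetical indicator set: digests of known abuse material, as distributed
# by a central authority. Production systems use perceptual hashes (PhotoDNA-
# style) so that resized or re-encoded copies still match; a plain SHA-256
# digest only catches byte-identical files.
KNOWN_INDICATORS: set[str] = set()


def file_indicator(data: bytes) -> str:
    """Compute an exact-match indicator (SHA-256 digest) for an uploaded file."""
    return hashlib.sha256(data).hexdigest()


def must_report(data: bytes) -> bool:
    """True if the upload matches a known indicator, meaning it should be
    blocked, reported, and removed rather than published."""
    return file_indicator(data) in KNOWN_INDICATORS
```

The gap between exact and perceptual matching is also where the error-rate debate comes from: fuzzier matching catches altered copies of known material but inevitably misfires more often, a point central to the criticisms discussed below.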

Online companies must assist victims who want child sexual abuse material depicting them removed, or access to it disabled. Victims can ask for support from the EU Centre, which will check whether the companies involved have removed or disabled access to the items victims want taken down.

Privacy Concerns and Opposition

The breakthrough comes after months of stalled negotiations and a postponed October vote, when Germany joined a blocking minority opposing what critics commonly call "chat control." Berlin argued the proposal risked "unwarranted monitoring of chats," comparing it to opening private correspondence.

Critics from Big Tech companies and data privacy NGOs warn the measures could pave the way for mass surveillance, as private messages would be scanned by authorities to detect illegal images. The Computer and Communications Industry Association stated that EU member states made clear the regulation can only move forward if new rules strike a true balance protecting minors while maintaining confidentiality of communications, including end-to-end encryption.

Also read: EU Chat Control Proposal to Prevent Child Sexual Abuse Slammed by Critics

Former Pirate MEP Patrick Breyer, who has been advocating against the file, characterized the Council endorsement as "a Trojan Horse" that legitimizes warrantless, error-prone mass surveillance of millions of Europeans by US corporations through cementing voluntary mass scanning.

The European Parliament's own study heavily critiqued the Commission's proposal, concluding that no currently available technology can detect child sexual abuse material without high error rates affecting all messages, files and data on a platform. The study also concluded the proposal would undermine end-to-end encryption and the security of digital communications.

Scope of the Crisis

Statistics underscore the urgency: 20.5 million reports covering 63 million files of abuse material were submitted to the National Center for Missing & Exploited Children's CyberTipline last year, and online grooming has increased 300 percent since negotiations began. Every half second, an image of a child being sexually abused is reported online.

Sixty-two percent of abuse content flagged by the Internet Watch Foundation in 2024 was traced to EU servers, with at least one in five children in Europe a victim of sexual abuse.

The Council position allows trilogue negotiations with the European Parliament and Commission to start in 2026. Those negotiations need to conclude before the already postponed expiration of the current e-Privacy regulation that allows exceptions under which companies can conduct voluntary scanning. The European Parliament reached its negotiating position in November 2023.

China Updates Cybersecurity Law to Address AI and Infrastructure Risks


China has announced amendments to its Cybersecurity Law (CSL), marking the first major overhaul of the framework since its enactment in 2017. The revisions, approved by the Standing Committee of the National People’s Congress in October 2025, are aimed at enhancing artificial intelligence (AI) safety, strengthening enforcement mechanisms, and clarifying incident reporting obligations for onshore infrastructure. The updated cybersecurity law will officially take effect on January 1, 2026.

CSL Updates Strengthen AI Governance and National Security

One of the most notable updates to the CSL is a new article emphasizing state support for AI development and safety, the first explicit mention of artificial intelligence within China’s cybersecurity framework. At the same time, the amendment stresses the importance of establishing ethical standards and safety oversight mechanisms for AI technologies. The new provisions encourage the use of AI and other technologies to improve cybersecurity management, signaling a growing recognition of AI’s dual role as both an enabler of progress and a potential source of risk.

While the revised cybersecurity law articulates strategic priorities, detailed implementation guidelines are expected to follow in future regulations or technical standards, Global Policy Watch reported.

Expanding Enforcement and Liability

The 2025 amendments introduce stricter enforcement measures and higher penalties for violations under the CSL. Companies and individuals found in serious breach of the law could face increased fines, up to RMB 10 million for organizations and RMB 1 million for individuals. The revisions also broaden liability to include additional categories of violations, reflecting China’s ongoing efforts to strengthen accountability across its digital ecosystem.

Moreover, the updated cybersecurity law expands its extraterritorial reach. Previously, the CSL’s jurisdiction over cross-border cyber incidents was limited to foreign actions harming China’s critical information infrastructure (CII). The new amendments extend coverage to any foreign conduct that endangers the country’s network security, regardless of whether it targets CII. In severe cases, authorities may impose sanctions such as asset freezes or other punitive measures.

Clarifying Data Protection Obligations

The amendments also resolve a long-standing ambiguity surrounding personal data processing. Under the revised CSL, network operators are now explicitly required to comply not only with the cybersecurity law itself but also with the Civil Code and the Personal Information Protection Law (PIPL). This clarification reinforces the interconnected nature of China’s data governance regime and provides clearer guidance for companies handling personal information.

Complementing the CSL amendments, the Cyberspace Administration of China (CAC) issued the Administrative Measures for National Cybersecurity Incident Reporting, which will come into force on November 1, 2025. These new reporting measures consolidate previously scattered requirements into a unified framework, creating clearer operational expectations for organizations managing onshore infrastructure.

The Measures apply to all network operators that build or operate networks within China or provide services through Chinese networks. Notably, the rules appear to exclude offshore incidents, even when they affect Chinese users, suggesting that the primary focus remains on domestic cybersecurity resilience.

Defined Thresholds and Reporting Procedures

Under the new system, cybersecurity incidents are classified into four levels of severity. Operators must report “relatively major” incidents, such as data breaches involving more than one million individuals or economic losses exceeding RMB 5 million (approximately USD 700,000), within four hours of discovery. A preliminary report must be followed by a full assessment within 72 hours and a post-incident review within 30 days of resolution.

The CAC has introduced multiple reporting channels, including a dedicated hotline, website, email, and WeChat platform, to simplify compliance. Failure to report, delayed notifications, or false reporting can result in penalties. Conversely, prompt and transparent reporting may mitigate or eliminate liability under the revised cybersecurity law.
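As a rough illustration of how these thresholds and deadlines could be wired into an incident-response runbook, consider the sketch below. The figures come from the Measures as described above; the function and field names are invented, and the severity levels other than “relatively major” are not enumerated here.

```python
from datetime import datetime, timedelta

# Thresholds that make a breach at least "relatively major" under the Measures.
BREACH_THRESHOLD_INDIVIDUALS = 1_000_000   # individuals affected
LOSS_THRESHOLD_RMB = 5_000_000             # economic loss (approx. USD 700,000)


def is_relatively_major(affected_individuals: int, loss_rmb: float) -> bool:
    """An incident crosses the reporting bar if either threshold is exceeded."""
    return (affected_individuals > BREACH_THRESHOLD_INDIVIDUALS
            or loss_rmb > LOSS_THRESHOLD_RMB)


def reporting_deadlines(discovered_at: datetime,
                        resolved_at: datetime) -> dict[str, datetime]:
    """Deadlines for a 'relatively major' incident: a preliminary report within
    4 hours of discovery, a full assessment within 72 hours, and a
    post-incident review within 30 days of resolution."""
    return {
        "preliminary_report": discovered_at + timedelta(hours=4),
        "full_assessment": discovered_at + timedelta(hours=72),
        "post_incident_review": resolved_at + timedelta(days=30),
    }
```

The four-hour preliminary window is the operationally demanding part, since it leaves little room for manual triage before the clock runs out.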