Weather tracker: cyclones hit Australia and Madagascar and -40C cold snap in northern Europe

Western Australia and Madagascar struck by destructive winds and rain, while Finland and Norway have coldest January since 2010

Tropical Cyclone Mitchell hit the coast of Western Australia last week. It initially developed as a weak tropical low over the Northern Territory in early February, then tracked westwards over Western Australia’s Kimberley region and eventually reached the Indian Ocean.

Fuelled by warm waters, Mitchell intensified into a tropical cyclone and moved south-west, hugging the coast of Western Australia before deepening into a category three storm.


© Photograph: Zoom Earth


  •  

Worn Down by Worry, Parents Look Longingly at Australia’s Social Media Ban

After the country barred children under 16 from using social media, many parents have been asking whether similarly tough action is needed in their own countries.

© Matthew Abbott for The New York Times

Students waiting for the bus in Sydney, Australia, in November. The country’s new law barring children from using social media has helped fuel emotional debate across the world.
  •  

Australia Kicks Kids Off Social Media + Is the A.I. Water Issue Fake? + Hard Fork Wrapped

“I’m told that Australian teens, in preparation for this ban, have been exchanging phone numbers with each other.”

© Photo Illustration by The New York Times; Photo: David Gray/Agence France-Presse — Getty Images

  •  

After Australia, Which Countries Could Be Next to Ban Social Media for Children?

Governments are studying the decision to prohibit youths from using platforms like Facebook and TikTok as worries grow about the potential harm they cause.

© Ida Marie Odgaard/Ritzau Scanpix, via Agence France-Presse — Getty Images

Elementary school children in Denmark, which could become the first country in the European Union to impose an age limit on access to social media.
  •  

Australia’s Social Media Ban for Kids: Protection, Overreach or the Start of a Global Shift?


On a cozy December morning, as children in Australia set their bags aside for the holiday season and picked up their tablets and phones to take a selfie and announce to the world that they were all set for the fun to begin, something felt amiss. They couldn't access their Snapchat and Instagram accounts. No, it wasn't another outage caused by a cyberattack, because they could see their parents lounging on the couch and laughing at dog dance reels. So why were the kids locked out? The answer: the ban on social media for children under 16 had officially taken effect.

It wasn't just one or 10 or 100 but more than one million young users who woke up locked out of their social media. No TikTok scroll. No Snapchat streak. No YouTube comments. Australia had quietly entered a new era with the world’s first nationwide ban on social media for children under 16, effective December 10. The move has set off global debate, parental relief, youth frustration, and a broader question: Is this the start of a global shift, or a risky social experiment?

Prime Minister Anthony Albanese was clear about why his government took this unparalleled step. “Social media is doing harm to our kids, and I’m calling time on it,” he said during a press conference. “I’ve spoken to thousands of parents… they’re worried sick about the safety of our kids online, and I want Australian families to know that the Government has your back.”

Under the Albanese government's social media policy, platforms including Instagram, Facebook, X, Snapchat, TikTok, Reddit, Twitch, Kick, Threads and YouTube must block users under 16, or face fines of up to AU$49.5 million. Parents and children won’t be penalized, but tech companies will.

Source: eSafety Commissioner

Australia's Ban on Social Media: A Big Question

Albanese pointed to rising concerns about the effects of social media on children, from body-image distortion to exposure to inappropriate content and addictive algorithms that tug at young attention spans. Research supports these concerns. A Pew Research Center study found:
  • 48% of teens say social media has a mostly negative effect on people their age, up sharply from 32% in 2022.
  • 45% feel they spend too much time on social media.
  • Teen girls experience more negative impacts than boys, including mental health struggles (25% vs 14%) and loss of confidence (20% vs 10%).
  • Yet paradoxically, 74% of teens feel more connected to friends because of social media, and 63% use it for creativity.
These contradictions make the issue far from black and white. Psychologists remind us that adolescence, beginning around age 10 and stretching into the mid-20s, is a time of rapid biological and social change, and that maturity levels vary. This means that a one-size-fits-all ban on social media may overshoot the mark.

Ban on Social Media for Users Under 16: How People Reacted

Australia’s announcement, first revealed in November 2024, has prompted countries from Malaysia to Denmark to consider similar legislation. But not everyone is convinced this is the right way forward.

Supporters Applaud “A Chance at a Real Childhood”

Pediatric occupational therapist Cris Rowan, who has spent 22 years working with children, celebrated the move: “This may be the first time children have the opportunity to experience a real summer,” she said. “Canada should follow Australia’s bold initiative. Parents and teachers can start their own movement by banning social media from homes and schools.” Parents’ groups have also welcomed the decision, seeing it as a necessary intervention in a world where screens dominate childhood.

Others Say the Ban Is Imperfect, but Necessary

Australian author Geoff Hutchison puts it bluntly: “We shouldn’t look for absolutes. It will be far from perfect. But we can learn what works… We cannot expect the repugnant tech bros to care.” His view reflects a broader belief that tech companies have too much power, and too little accountability.

Experts Warn Against False Security 

However, some experts caution that the Australia ban on social media may create the illusion of safety while failing to address deeper issues. Professor Tama Leaver, Internet Studies expert at Curtin University, told The Cyber Express that while the ban on social media addresses some risks, such as algorithmic amplification of inappropriate content and endless scrolling, many online dangers remain.

“The social media ban only really addresses one set of risks for young people, which is algorithmic amplification of inappropriate content and the doomscrolling or infinite scroll. Many risks remain. The ban does nothing to address cyberbullying since messaging platforms are exempt from the ban, so cyberbullying will simply shift from one platform to another.”

Leaver also noted that restricting access to popular platforms will not drive children offline. Because of the ban, young users will explore whatever digital spaces remain, which could be less regulated and potentially riskier.

“Young people are not leaving the digital world. If we take some apps and platforms away, they will explore and experiment with whatever is left. If those remaining spaces are less known and more risky, then the risks for young people could definitely increase. Ideally the ban will lead to more conversations with parents and others about what young people explore and do online, which could mitigate many of the risks.”

From a broader perspective, Leaver emphasized that the ban on social media will only be fully beneficial if accompanied by significant investment in digital literacy and digital citizenship programs across schools:

“The only way this ban could be fully beneficial is if there is a huge increase in funding and delivery of digital literacy and digital citizenship programs across the whole K-12 educational spectrum. We have to formally teach young people those literacies they might otherwise have learnt socially, otherwise the ban is just a 3 year wait that achieves nothing.”

He added that platforms themselves should take a proactive role in protecting children:

“There is a global appetite for better regulation of platforms, especially regarding children and young people. A digital duty of care which requires platforms to examine and proactively reduce or mitigate risks before they appear on platforms would be ideal, and is something Australia and other countries are exploring. Minimizing risks before they occur would be vastly preferable to the current processes which can only usually address harm once it occurs.”

Looking at the global stage, Leaver sees Australia’s ban on social media as a potential learning opportunity for other nations:

“There is clearly global appetite for better and more meaningful regulation of digital platforms. For countries considering their own bans, taking the time to really examine the rollout in Australia, to learn from our mistakes as much as our ambitions, would seem the most sensible path forward.”

Other specialists continue to warn that the ban on social media could isolate vulnerable teenagers or push them toward more dangerous, unregulated corners of the internet.

Legal Voices Raise Serious Constitutional Questions

Senior Supreme Court Advocate Dr. K. P. Kylasanatha Pillay offered a thoughtful reflection: “Exposure of children to the vagaries of social media is a global concern… But is a total ban feasible? We must ask whether this is a reasonable restriction or if it crosses the limits of state action. Not all social media content is harmful. The best remedy is to teach children awareness.” His perspective reflects growing debate about rights, safety, and state control.

LinkedIn, Reddit, and the Public Divide

Social media itself has become the battleground for reactions. On Reddit, youngsters were particularly vocal about the ban on social media. One teen wrote: “Good intentions, bad execution. This will make our generation clueless about internet safety… Social media is how teenagers express themselves. This ban silences our voices.” Another pointed out the easy loophole: “Bypassing this ban is as easy as using a free VPN. Governments don’t care about safety — they want control.” But one adult user disagreed: “Everyone against the ban seems to be an actual child. I got my first smartphone at 20. My parents were right — early exposure isn’t always good.” This generational divide is at the heart of the debate.

Brands, Marketers, and Schools Brace for Impact

Bindu Sharma, Founder of World One Consulting, highlighted the global implications: “Ten of the biggest platforms were ordered to block children… The world is watching how this plays out.” If the ban succeeds, brands may rethink how they target younger audiences. If it fails, digital regulation worldwide may need reimagining.

Where Does This Leave the World?

Australia’s decision to ban social media for children under 16 is bold, controversial, and rooted in good intentions. It could reshape how societies view childhood, technology, and digital rights. But as critics note, a ban on social media platforms can also create unintended consequences, from delinquency to digital illiteracy. What’s clear is this: Australia has started a global conversation that’s no longer avoidable. As one LinkedIn user concluded: “Safety of the child today is assurance of the safety of society tomorrow.”
  •  

Australian Social Media Ban Takes Effect as Kids Scramble for Alternatives


Australia’s world-first social media ban for children under age 16 takes effect on December 10, leaving kids scrambling for alternatives and the Australian government with the daunting task of enforcing the ambitious ban. What is the Australian social media ban, who and which services does it cover, and what steps can affected children take? We’ll cover all that, plus the compliance and enforcement challenges facing both social media companies and the Australian government – and the move toward similar bans in other parts of the world.

Australian Social Media Ban Supported by Most – But Not All

In September 2024, Prime Minister Anthony Albanese announced that his government would introduce legislation to set a minimum age requirement for social media because of concerns about the effect of social media on the mental health of children. The amendment to the Online Safety Act 2021 passed in November 2024 with overwhelming support in the Australian Parliament. Public backing has been strong as well – even as most parents say they don’t plan to fully enforce the ban with their children. The law already faces a legal challenge from The Digital Freedom Project, and the Australian Financial Review reported that Reddit may file a challenge too. Services affected by the ban – which proponents call a social media “delay” – include the following 10 services:
  • Facebook
  • Instagram
  • Kick
  • Reddit
  • Snapchat
  • Threads
  • TikTok
  • Twitch
  • X
  • YouTube
Those services must take steps by Wednesday to remove accounts held by users under 16 in Australia and prevent children from registering new accounts. Many services began to comply before the Dec. 10 implementation date, although X had not yet communicated its policy to the government as of Dec. 9, according to The Guardian. Companies that fail to comply with the ban face fines of up to AUD $49.5 million, while there are no penalties for parents or children who fail to comply.

Opposition From a Wide Range of Groups - And Efforts Elsewhere

Opposition to the law has come from a range of groups, including those concerned about the privacy issues resulting from age verification processes such as facial recognition and assessment technology or use of government IDs. Others have said the ban could force children toward darker, less regulated platforms, and one group noted that children often reach out for mental health help on social media.

Amnesty International also opposed the ban. The international human rights group called the ban “an ineffective quick fix that’s out of step with the realities of a generation that lives both on and offline.” Amnesty said strong regulation and safeguards would be a better solution. “The most effective way to protect children and young people online is by protecting all social media users through better regulation, stronger data protection laws and better platform design,” Amnesty said. “Robust safeguards are needed to ensure social media platforms stop exposing users to harms through their relentless pursuit of user engagement and exploitation of people’s personal data.”

“Many young people will no doubt find ways to avoid the restrictions,” the group added. “A ban simply means they will continue to be exposed to the same harms but in secret, leaving them at even greater risk.”

Even the prestigious medical journal The Lancet suggested that a ban may be too blunt an instrument and that 16-year-olds will still face the same harmful content and risks. Jasmine Fardouly of the University of Sydney School of Psychology noted in a Lancet commentary that “Further government regulations and support for parents and children are needed to help make social media safe for all users while preserving its benefits.”

Still, despite the chorus of concerns, the idea of a social media ban for children is catching on in other places, including the EU and Malaysia.

Australian Children Seek Alternatives as Compliance Challenges Loom

The Australian social media ban leaves open a range of options for under-16 users, among them Yope, Lemon8, Pinterest, Discord, WhatsApp, Messenger, iMessage, Signal, and communities that have been sources of controversy such as Telegram and 4chan.

Users have exchanged phone numbers with friends and other users, and many have downloaded their personal data from apps where they’ll be losing access, including photos, videos, posts, comments, interactions and platform profile data. Many have investigated VPNs as a possible way around the ban, but a VPN is unlikely to work with an existing account that has already been identified as an underage Australian account.

In the meantime, social media services face the daunting task of trying to confirm the age of account holders, a process that even Albanese has acknowledged “won’t be 100 per cent perfect.” There have already been reports of visual age checks failing, and a government-funded report released in August admitted the process will be imperfect.

The government has published substantial guidance for helping social media companies comply with the law, but it will no doubt take time to determine what “reasonable steps” to comply look like. In the meantime, social media companies will have to navigate compliance guidance like the following passage:

“Providers may choose to offer the option to end-users to provide government-issued identification or use the services of an accredited provider. However, if a provider wants to employ an age assurance method that requires the collection of government-issued identification, then the provider must always offer a reasonable alternative that doesn’t require the collection of government-issued identification. A provider can never require an end-user to give government-issued identification as the sole method of age assurance and must always give end-users an alternative choice if one of the age assurance options is to use government-issued identification.
A provider also cannot implement an age assurance system which requires end-users to use the services of an accredited provider without providing the end-user with other choices.”  
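The alternative-choice requirement in the guidance above amounts to a simple invariant over the set of age assurance methods a provider offers. The sketch below is purely illustrative: the method names and the `is_compliant` helper are invented for this example and are not part of any official eSafety tooling or API.

```python
# Illustrative check of the "alternative choice" rule from the compliance
# guidance: government-issued ID (or an accredited third-party provider)
# may be offered as an age assurance method, but never as the only one.

RESTRICTED_METHODS = {"government_id", "accredited_provider"}

def is_compliant(offered_methods: list[str]) -> bool:
    """Return True if the offered methods leave users a real alternative."""
    methods = set(offered_methods)
    if not methods:
        # A provider must offer at least one age assurance method.
        return False
    for restricted in RESTRICTED_METHODS:
        if restricted in methods and not (methods - RESTRICTED_METHODS):
            # Every offered option requires government ID or an accredited
            # provider: no genuine alternative exists.
            return False
    return True

# Government ID as the sole option fails; adding any independent method
# (here, a hypothetical facial age estimation check) satisfies the rule.
print(is_compliant(["government_id"]))                       # False
print(is_compliant(["government_id", "facial_estimation"]))  # True
```

This reads the guidance strictly: offering only government ID plus an accredited provider still fails, since neither option is independent of the restricted mechanisms. How regulators would treat that combination in practice is not specified in the passage quoted above.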
  •  

Australia Establishes AI Safety Institute to Combat Emerging Threats from Frontier AI Systems


Australia's fragmented approach to AI oversight—with responsibilities scattered across privacy commissioners, consumer watchdogs, online safety regulators, and sector-specific agencies—required coordination to keep pace with rapidly evolving AI capabilities and their potential to amplify existing harms while creating entirely new threats.

The Australian Government announced the establishment of the AI Safety Institute, backed by $29.9 million in funding, to monitor emerging AI capabilities, test advanced systems, and share intelligence across government while supporting regulators to ensure AI companies comply with Australian law. The Institute is part of the larger National AI Plan, which the Australian government officially released on Tuesday.

The Institute will become operational in early 2026 as the centerpiece of the government's strategy to keep Australians safe while capturing economic opportunities from AI adoption. The approach maintains existing legal frameworks as the foundation for addressing AI-related risks rather than introducing standalone AI legislation, with the Institute supporting portfolio agencies and regulators to adapt laws when necessary.

Dual Focus on Upstream Risks and Downstream Harms

The AI Safety Institute will focus on both upstream AI risks and downstream AI harms. Upstream risks involve model capabilities and the ways AI systems are built and trained that can create or amplify harm, requiring technical evaluation of frontier AI systems before deployment.

Downstream harms represent real-world effects people experience when AI systems are used, including bias in hiring algorithms, privacy breaches from data processing, discriminatory outcomes in automated decision-making, and emerging threats like AI-enabled crime and AI-facilitated abuse disproportionately impacting women and girls.

The Institute will generate and share technical insights on emerging AI capabilities, working across government and with international partners. It will develop advice, support bilateral and multilateral safety engagement, and publish safety research to inform industry and academia while engaging with unions, business, and researchers to ensure functions meet community needs.

Supporting Coordinated Regulatory Response

The Institute will support coordinated responses to downstream AI harms by engaging with portfolio agencies and regulators, monitoring and analyzing information across government to allow ministers and regulators to take informed, timely, and cohesive regulatory action.

Portfolio agencies and regulators remain best placed to assess AI uses and harms in specific sectors and adjust regulatory approaches when necessary. The Institute will support existing regulators to ensure AI companies are compliant with Australian law and uphold legal standards of fairness and transparency.

The government emphasized that Australia has strong existing, largely technology-neutral legal frameworks including sector-specific guidance and standards that can apply to AI. The approach promotes flexibility, uses regulators' existing expertise, and targets emerging threats as understanding of AI's strengths and limitations evolves.

Addressing Specific AI Harms

The government is taking targeted action against specific harms while continuing to assess suitability of existing laws. Consumer protections under Australian Consumer Law apply equally to AI-enabled goods and services, with Treasury's review finding Australians enjoy the same strong protections for AI products as traditional goods.

The government is addressing AI-related online risks through enforceable industry codes under the Online Safety Act 2021 and has criminalized non-consensual deepfake material, while considering further restrictions on "nudify" apps and reforms to tackle algorithmic bias.

The Attorney-General's Department engages stakeholders through the Copyright and AI Reference Group to consult on possible updates to copyright laws as they relate to AI, with the government ruling out a text and data mining exception to provide certainty to Australian creators and media workers.

Healthcare AI regulation is under review through the Safe and Responsible AI in Healthcare Legislation and Regulation Review, while the Therapeutic Goods Administration oversees AI used in medical device software following its review on strengthening regulation of medical device software including artificial intelligence.


National Security and Crisis Response

The Department of Home Affairs, National Intelligence Community, and law enforcement agencies continue efforts to proactively mitigate serious risks posed by AI. Home Affairs coordinates cross-government efforts on cybersecurity and critical infrastructure protection while overseeing the Protective Security Policy Framework detailing policy requirements for authorizing AI technology systems for non-corporate Commonwealth entities.

AI is likely to exacerbate existing national security risks and create new, unknown threats. The government is preparing for potential AI-related incidents through the Australian Government Crisis Management Framework, which provides overarching policy for managing potential crises.

The government will consider how AI-related harms are managed under the framework to ensure ongoing clarity regarding roles and responsibilities across government to support coordinated and effective action.

International Engagement

The Institute will collaborate with domestic and international partners including the National AI Centre and the International Network of AI Safety Institutes to support global conversations on understanding and addressing AI risks.

Australia is a signatory to the Bletchley Declaration, Seoul Declaration, and Paris Statement emphasizing inclusive international cooperation on AI governance. Participation in the UN Global Digital Compact, Hiroshima AI Process, and Global Partnership on AI supports conversations on advancing safe, secure, and trustworthy adoption.

The government is developing an Australian Government Strategy for International Engagement and Regional Leadership on Artificial Intelligence to align foreign and domestic policy settings while establishing priorities for bilateral partnerships and engagement in international forums.

  •