
Australia’s social media ban launched with barely a hitch – but the real test is still to come

12 December 2025 at 07:00

The policy to cut off social media access for more than 2 million under-16s remains popular with Australians, while other countries look to follow suit

On the lawns of the prime minister’s Kirribilli residence in Sydney, overlooking the harbour, Anthony Albanese said he had never been prouder.

“This is a day in which my pride to be prime minister of Australia has never been greater. This is world-leading. This is Australia showing enough is enough,” he said as the country’s under-16s social media ban came into effect on Wednesday.

Continue reading...

© Composite: Victoria Hart/Guardian Design/Getty images



Amanda Seyfried says she will not apologise for calling Charlie Kirk ‘hateful’ after his shooting

11 December 2025 at 12:34

The Housemaid actor received backlash in September when she left a comment on Instagram after the rightwing activist was killed

The Housemaid star Amanda Seyfried has said she is “not fucking apologising” for describing Charlie Kirk as “hateful” after the latter was shot dead in September.

Seyfried was speaking to Who What Wear when she was asked about her social media activity, including the backlash around her Kirk comment. “I’m not fucking apologising for that. I mean, for fuck’s sake, I commented on one thing. I said something that was based on actual reality and actual footage and actual quotes. What I said was pretty damn factual, and I’m free to have an opinion, of course.”

Continue reading...

© Photograph: Dominik Bindl/Getty Images


Nick Clegg takes role at London-based venture capitalists Hiro Capital

10 December 2025 at 12:36

Former deputy prime minister, who left Meta this year, to be joined by Facebook-owner’s chief AI scientist

Nick Clegg is to add venture capitalist to his list of post-politics jobs, with the former British deputy prime minister and ex-senior executive at Meta taking on a new role at London-based Hiro Capital.

Clegg, who left his role as the Facebook-owner’s head of global affairs this year, is joining the European tech investment firm as a general partner.

Continue reading...

© Photograph: Kin Cheung/AP


Meta offers EU users ad-light option in push to end investigation

8 December 2025 at 09:57

Meta has agreed to make changes to its “pay or consent” business model in the EU, seeking to agree to a deal that avoids further regulatory fines at a time when the bloc’s digital rule book is drawing anger from US authorities.

On Tuesday, the European Commission announced that the social media giant had offered users an alternative choice of Facebook and Instagram services that would show them fewer personalized advertisements.

The offer follows an EU investigation into Meta’s policy of requiring users either to consent to data tracking or pay for an ad-free service. The Financial Times reported on optimism that an agreement could be reached between the parties in October.

Read full article


© Derick Hudson

Dangerous RCE Flaw in React, Next.js Threatens Cloud Environments, Apps

4 December 2025 at 10:54

Security and developer teams are scrambling to address a critical security flaw in frameworks tied to the popular React JavaScript library. The vulnerability, which also affects the Next.js framework, is not only easy to exploit; React is also widely used, including in 39% of cloud environments.

The post Dangerous RCE Flaw in React, Next.js Threatens Cloud Environments, Apps appeared first on Security Boulevard.

Syntax hacking: Researchers discover sentence structure can bypass AI safety rules

2 December 2025 at 07:15

Researchers from MIT, Northeastern University, and Meta recently released a paper suggesting that large language models (LLMs) similar to those that power ChatGPT may sometimes prioritize sentence structure over meaning when answering questions. The findings reveal a weakness in how these models process instructions that may shed light on why some prompt injection or jailbreaking approaches work, though the researchers caution their analysis of some production models remains speculative since training data details of prominent commercial AI models are not publicly available.

The team, led by Chantal Shaib and Vinith M. Suriyakumar, tested this by asking models questions with preserved grammatical patterns but nonsensical words. For example, when prompted with “Quickly sit Paris clouded?” (mimicking the structure of “Where is Paris located?”), models still answered “France.”

This suggests models absorb both meaning and syntactic patterns, but can overrely on structural shortcuts when they strongly correlate with specific domains in training data, which sometimes allows patterns to override semantic understanding in edge cases. The team plans to present these findings at NeurIPS later this month.
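The kind of probe described above could be sketched roughly as follows: keep the prompt's token count and its domain-anchoring word ("Paris") while replacing the remaining words with unrelated ones, so that only the surface structure survives. The function and word lists below are illustrative assumptions, not code from the paper.

```python
import random

# Illustrative sketch: build a structure-preserving nonsense probe in
# the spirit of the paper's "Quickly sit Paris clouded?" example.
# The replacement word list is arbitrary; only token count, the kept
# domain word, and the trailing "?" are preserved.
def scramble_content_words(prompt, keep, replacements, seed=0):
    rng = random.Random(seed)
    tokens = prompt.rstrip("?").split()
    out = []
    for tok in tokens:
        if tok in keep:              # preserve the domain-anchoring token
            out.append(tok)
        else:                        # swap in an unrelated word
            out.append(rng.choice(replacements))
    return " ".join(out) + "?"
```

A probe built this way can then be sent to a model to see whether it still answers as if the original question had been asked.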

Read full article


© EasternLightcraft via Getty Images

Meta boosts scam protection on WhatsApp and Messenger

23 October 2025 at 06:39

Vulnerable Facebook Messenger and WhatsApp users are getting more protection thanks to a move from the applications’ owner, Meta. The company has announced more safeguards to protect users (especially the elderly) from scammers.

The social media, publishing, and VR giant has added a new warning on WhatsApp that displays an alert when you share your screen during video calls with unknown contacts.

On Messenger, protection begins with on-device behavioral analysis, complemented by an optional cloud-based AI review that requires user consent. The on-device protection will flag suspicious messages from unknown accounts automatically. You then have the option to forward it to the cloud for further analysis (although note that this will likely break the default end-to-end encryption on that message, as Meta has to read it to understand the content). Meta’s AI service will then explain why the device interpreted the message as risky and what to do about it, offering information about common scams to provide context.
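The consent-gated, two-stage flow described above can be sketched roughly as follows. The function names, return values, and phrase list are illustrative assumptions, not Meta's implementation; the point is that cloud review only happens after an on-device flag and explicit user consent.

```python
# Illustrative sketch of a two-stage scam check: a cheap on-device
# heuristic flags messages from unknown senders, and only with the
# user's consent is a flagged message forwarded for deeper review.
SUSPICIOUS_PHRASES = ("wire transfer", "gift card", "verify your account")

def on_device_flag(message, sender_is_known):
    """Stage 1: local heuristic; never leaves the device."""
    if sender_is_known:
        return False
    text = message.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

def handle_message(message, sender_is_known, user_consents_to_cloud_review):
    """Stage 2 gate: cloud review requires both a flag and consent."""
    if not on_device_flag(message, sender_is_known):
        return "deliver"
    if user_consents_to_cloud_review:
        return "forward_for_review"  # breaks E2EE for this one message
    return "warn_locally"
```

The design choice worth noting is that the escalation path is opt-in per the article: without consent, the user still gets a local warning, but the message content stays end-to-end encrypted.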

That context will be useful for vulnerable users, and it comes after Meta worked with researchers at social media analysis company Graphika to document online scam trends. Some of the scams it found included fake home remodeling services, and fraudulent government debt relief sites, both targeting seniors. There were also fake money recovery services offering to get scam victims’ funds back (which we’ve covered before).

Here’s a particularly sneaky scam that Meta identified: fake customer support scammers. These jerks monitor comments made under legitimate online accounts for airlines, travel agencies, and banks. They then contact the people who commented, impersonating customer support staff and persuading them to enter into direct message conversations or fill out Google Forms. Meta has removed over 21,000 Facebook pages impersonating customer support, it said.

A rising tide of scams

We can never have too many protections for vulnerable internet users, as scams continue to target them through messaging and social media apps. While scams target everyone (costing Americans $16.6 billion in losses, according to the FBI’s cybercrime unit IC3), those over 60 are hit especially hard. They lost $4.8 billion in 2024. Overall, losses from scams were up 33% across the board year-on-year.

Other common scams include “celebrity baiting”, which uses celebrity figures without their knowledge to dupe users into fraudulent schemes including investments and cryptocurrency. With deepfakes making it easier than ever to impersonate famous people, Meta has been testing facial recognition to help spot celebrity-bait ads for a year now, and recently announced plans to expand that initiative.

If you know someone less tech-savvy who uses Meta’s apps, encourage them to try these new protections—like Passkeys and Security Checkup. Passkeys let you log in using a fingerprint, face, or PIN, while Security Checkup guides you through steps to secure your account.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Meta ignored child sex abuse in VR, say whistleblowers

11 September 2025 at 17:49

Two former employees at Meta testified against the company at a Senate hearing this week, accusing it of downplaying the dangers of child abuse in its virtual reality (VR) environment.

The whistleblowers say they saw incidents where children were asked for sex acts and nude photos in Facebook’s VR world, which it calls the ‘metaverse’. This is a completely immersive world that people enter by wearing a Meta virtual reality headset. There, they are able to use a variety of apps that surround them in 360-degree visuals. They can interact with the environment, and with other users.

At the hearing, held by the US Senate Judiciary Subcommittee on Privacy, Technology and the Law, the two former employees warned that Meta deliberately turned a blind eye to potential child harms. It restricted the information that researchers could collect about child safety and even altered research designs so that it could preserve plausible deniability, they said, adding that it also made researchers delete data that showed harm was being done to kids in VR.

“We researchers were directed how to write reports to limit risk to Meta,” said Jason Sattizahan, who researched integrity in Meta’s VR initiative during his six-year stint at the company. “Internal work groups were locked down, making it nearly impossible to share data and coordinate between teams to keep users safe. Mark Zuckerberg disparaged whistleblowers, claiming past disclosures were ‘used to construct a false narrative'”.

“When our research uncovered that underage children using Meta VR in Germany were subject to demands for sex acts, nude photos and other acts that no child should ever be exposed to, Meta demanded that we erase any evidence of such dangers that we saw,” continued Sattizahan. The company, which completely controlled his research, demanded that he change his methods to avoid collecting data on emotional and psychological harm, he said.

“Meta is aware that its VR platform is full of underage children,” said Cayce Savage, who led research on youth safety and virtual reality at Meta between 2019 and 2023. She added that recognizing this problem would force the company to kick them off the system, which would harm its engagement numbers. “Meta purposely turns a blind eye to this knowledge, despite it being obvious to anyone using their products.”

The dangers to children in VR are especially severe, Savage added, because affecting the VR environment requires real-life physical movements made with the headsets and their controllers.

“Meta is aware that children are being harmed in VR. I quickly became aware that it is not uncommon for children in VR to experience bullying, sexual assault, to be solicited for nude photographs and sexual acts by pedophiles, and to be regularly exposed to mature content like gambling and violence, and to participate in adult experiences like strip clubs and watching pornography with strangers,” she said, adding that she had seen these things happening herself. “I wish I could tell you the percentage of children in VR experiencing these harms, but Meta would not allow me to conduct this research.”

In one case, abusers coordinated to set up a virtual strip club in the app Roblox and pay underage users the in-game currency, ‘Robux’, to have their avatars strip in the environment. Savage said she told Meta not to allow the app on its VR platform. “You can now download it in their app store,” she added.

This isn’t the first time that Meta has been accused of ignoring harm to children. In November 2023, a former employee warned that the company had ignored sexual dangers for children on Instagram, testifying that his own child had received unsolicited explicit pictures. In 2021, former employee Frances Haugen accused the company of downplaying risks to young users.

Facebook has reportedly referred to the “claims at the heart” of the hearing as “nonsense”.

Senator Marsha Blackburn, who chaired the meeting, has proposed the Kids Online Safety Act to force platforms into responsible design choices that would prevent harm to children.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Instagram Map: What is it and how do I control it?

18 August 2025 at 11:49

Instagram Map is a new feature (for Instagram, anyway) that users may have enabled without being fully aware of the consequences. The Map feature launched in the US on August 6, 2025, and is reportedly planned for a global rollout “soon.” As of mid-August 2025, not all users outside the US have received the feature yet, especially in Europe. Community reports confirm that the rollout is happening in stages: some users in Germany and other locations already have access, but many do not. It’s typical for Instagram features to take several weeks to reach all accounts and regions.

Basically, Instagram Map allows you to share your current location with your friends. But, already, there’s the first caveat: Are all your Instagram “friends” real friends? As in, the kind that you’d like to run into whenever they feel like it?

Add to that the (for me, at least) always nagging feeling that Meta will learn even more about you and your behavior, and you may want to reconsider your initial choice.

If you have been careful in selecting your friends, then it’s fine—good for you! If not, you may want to narrow the group that can see your location down to “Close friends” or select a few that you trust. Or, you could consider turning sharing off completely.

What to do the first time you use Instagram Map

  1. Open Instagram and go to your Direct Messages (DM) inbox.
  2. Find and tap the Map icon at the top (near the Notes or short posts section in your inbox).
  3. If you’ve never used Map before, you’ll get a prompt explaining how it works and asking for location access. Accept if you want to use it.
  4. When prompted, choose Who can see your location. Your choices:
    • Friends: Followers you follow back.
    • Close Friends: Your preselected Close Friends list.
    • Only these friends: Select specific people manually.
    • No one: Turn location sharing off entirely (still shows tagged posts).
  5. Select your preferred group and tap Share now.
Instagram Map share options

How to make changes later

If you want to check your share settings or change them at a later point:

  1. Tap the Map feature in your DM inbox.
  2. Tap the Settings icon (gear wheel) at the upper right.
  3. Choose the group to share with (Friends, Close Friends, Only these friends, or No one).
  4. Tap Done.

You can also add specific locations to a “Hidden Places” list so your real-time location never appears on the map when you visit those places. Here’s how:

  1. Open the Map feature via your DM inbox.
  2. Tap the Settings icon (gear wheel) at the top right.
  3. Tap the three-dot menu in the top corner of the settings menu.
  4. Drag a pin on the map to mark a place you want hidden.
  5. Use the slider to set a radius, which determines how large the hidden zone is.
  6. Type in the name of the place and tap Done.
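As a rough model of how such a radius check could work, a shared location would be suppressed whenever it falls inside any hidden zone. The haversine distance formula below is standard geometry, but the data model is an illustrative assumption, not Instagram's implementation.

```python
import math

# Illustrative sketch of a "Hidden Places" check: a location is not
# shared if it lies within the radius of any hidden zone.
EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def location_is_hidden(lat, lon, hidden_zones):
    """hidden_zones: list of (lat, lon, radius_m) tuples."""
    return any(haversine_m(lat, lon, zlat, zlon) <= radius
               for zlat, zlon, radius in hidden_zones)
```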

Sharing your location on Instagram Map is not enabled unless you actively choose to share it. What will be there are any posts that have a location tagged in them, something that’s an option every time you add photos and videos to your Stories or your grid. So, regardless of whether you choose to share your location, you can use the map to explore location-based content.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Meta accessed women’s health data from Flo app without consent, says court

7 August 2025 at 06:19

A jury has ruled that Meta accessed sensitive information from a women’s reproductive health tracking app without consent.

The app in question is called Flo Health. Developed in 2015 in Belarus to track menstrual cycles, it has evolved over the years into a tracker for highly detailed, intimate aspects of women’s reproductive health.

Flo Health user Erica Frasco brought a class action lawsuit against the company in 2021, following a damning report about its privacy infractions by the Wall Street Journal in 2019.

Since she downloaded the app in 2017, Frasco, like its other users, regularly answered highly intimate questions. These ranged from the timing and comfort level of menstrual cycles, through to mood swings and preferred birth control methods, and their level of satisfaction with their sex life and romantic relationships. The app even asked when users had engaged in sexual activity and whether they were trying to get pregnant.

According to the complaint, Flo Health promised not to share this data with third parties unless it was necessary for the provision of its services. Even then, it promised, it would share only information relevant to web hosting and app development. It would not include “information regarding your marked cycles, pregnancy, symptoms, notes and other information entered by [users]”, reported the original complaint.

Yet between 2016 and 2019 Flo Health shared that intimate data with companies including Facebook and Google, along with mobile marketing firm AppsFlyer, and Yahoo!-owned mobile analytics platform Flurry. Whenever someone opened the app, it would be logged. Every interaction inside the app was also logged, and this data was shared.

Flo Health didn’t impose rules on how these third parties could use the data. “In fact, the terms of service governing Flo Health’s agreement with these third parties allowed them to use the data for their own purposes, completely unrelated to services provided in connection with the App,” the complaint went on.

By December 2020, 150 million people were using the app, according to court documents. Flo had promised them that they could trust it.

Users were “trusting us with intimate personal information,” it said in its privacy policy. “We are committed to keeping that trust, which is why our policy as a company is to take every step to ensure that individual user’s data and privacy rights are protected.”

The Federal Trade Commission investigated these allegations and settled with Flo Health in 2021, imposing an independent review of its privacy policy and mandating that it not misrepresent its app.

The class action lawsuit claims common law invasion of privacy, breach of contract and implied contract, unjust enrichment, and breach of the Stored Communications Act and the California Confidentiality of Medical Information Act. It seeks damages for plaintiffs, along with some of the company’s profit.

Google and Flo Health have both settled with plaintiffs already, but Meta has not. The jury ruled that Meta intentionally “eavesdropped on and/or recorded their conversations by using an electronic device,” and that it did so without consent.

This case is important on so many levels. Aside from general privacy concerns, women’s menstrual health is an area of particular contention after the US Supreme Court removed the constitutional right to abortion in June 2022. That year, Meta came under scrutiny for providing police with private message data between a mother and her daughter planning medication to abort a pregnancy.

We could simply say “Don’t use Flo Health”, but the app was trusted until it was found out. How many others are sharing data in similarly irresponsible ways? Increasingly, we lean toward simply not using apps to track sensitive data of this kind at all.

However, then there are the websites to worry about. A report by ProPublica found that online pharmacies selling abortion pills were sharing sensitive data with Google and others. This could give law enforcement evidence in cases against women, it said. Technology promised us convenience, but its misuse also brings serious dangers to users.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.
