Normal view

Received yesterday — 12 December 2025

How to break free from smart TV ads and tracking

12 December 2025 at 07:30

Smart TVs can feel like a dumb choice if you’re looking for privacy, reliability, and simplicity.

Today’s TVs and streaming sticks are usually loaded up with advertisements and user tracking, making offline TVs seem very attractive. But ever since smart TV operating systems began making money, “dumb” TVs have been hard to find.

In response, we created this non-smart TV guide that includes much more than dumb TVs. Since non-smart TVs are so rare, this guide also breaks down additional ways to watch TV and movies online and locally without dealing with smart TVs’ evolution toward software-centric features and snooping. We’ll discuss a range of options suitable for various budgets, different experience levels, and different rooms in your home.

Read full article

Comments

© Aurich Lawson | Getty Images

Received before yesterday

Win hardware, collectibles, and more in the 2025 Ars Technica Charity Drive

10 December 2025 at 07:30

It’s once again that special time of year when we give you a chance to do well by doing good. That’s right—it’s the 2025 edition of our annual Charity Drive!

Every year since 2007, we’ve encouraged readers to give to Penny Arcade’s Child’s Play charity, which provides toys and games to kids being treated in hospitals around the world. In recent years, we’ve added the Electronic Frontier Foundation to our charity push, aiding in their efforts to defend Internet freedom. This year, as always, we’re providing some extra incentive for those donations by offering donors a chance to win pieces of our big pile of vendor-provided swag. We can’t keep it, and we don’t want it clogging up our offices, so it’s now yours to win.

This year’s swag pile is full of high-value geek goodies. We have over a dozen prizes valued at nearly $5,000 total, including gaming hardware and collectibles, apparel, and more. In 2023, Ars readers raised nearly $40,000 for charity, contributing to a total haul of more than $542,000 since 2007. We want to raise even more this year, and we can do it if readers dig deep.

Read full article

Comments

© Kyle Orland

Australia’s Social Media Ban for Kids: Protection, Overreach or the Start of a Global Shift?

10 December 2025 at 04:23

ban on social media

On a cozy December morning, as children in Australia set their bags aside for the holiday season and held their tabs and phones in hand to take that selfie and announce to the world they were all set for the fun to begin, something felt a miss. They couldn't access their Snap Chat and Instagram accounts. No it wasn't another downtime caused by a cyberattack, because they could see their parents lounging on the couch and laughing at the dog dance reels. So why were they not able to? The answer: the ban on social media for children under 16 had officially taken effect. It wasn't just one or 10 or 100 but more than one million young users who woke up locked out of their social media. No TikTok scroll. No Snapchat streak. No YouTube comments. Australia had quietly entered a new era, the world’s first nationwide ban on social media for children under 16, effective December 10. The move has initiated global debate, parental relief, youth frustration, and a broader question: Is this the start of a global shift, or a risky social experiment? Prime Minister Anthony Albanese was clear about why his government took this unparalleled step. “Social media is doing harm to our kids, and I’m calling time on it,” he said during a press conference. “I’ve spoken to thousands of parents… they’re worried sick about the safety of our kids online, and I want Australian families to know that the Government has your back.” Under the Anthony Albanese social media policy, platforms including Instagram, Facebook, X, Snapchat, TikTok, Reddit, Twitch, Kick, Threads and YouTube must block users under 16, or face fines of up to AU$32 million. Parents and children won’t be penalized, but tech companies will. [caption id="attachment_107569" align="aligncenter" width="448"]Australia ban Social Media Source: eSafety Commissioner[/caption]

Australia's Ban on Social Media: A Big Question

Albanese pointed to rising concerns about the effects of social media on children, from body-image distortion to exposure to inappropriate content and addictive algorithms that tug at young attention spans. [caption id="attachment_107541" align="aligncenter" width="960"]Ban on social media Source: Created using Google Gemini[/caption] Research supports these concerns. A Pew Research Center study found:
  • 48% of teens say social media has a mostly negative effect on people their age, up sharply from 32% in 2022.
  • 45% feel they spend too much time on social media.
  • Teen girls experience more negative impacts than boys, including mental health struggles (25% vs 14%) and loss of confidence (20% vs 10%).
  • Yet paradoxically, 74% of teens feel more connected to friends because of social media, and 63% use it for creativity.
These contradictions make the issue far from black and white. Psychologists remind us that adolescence, beginning around age 10 and stretching into the mid-20s, is a time of rapid biological and social change, and that maturity levels vary. This means that a one-size-fits-all ban on social media may overshoot the mark.

Ban on Social Media for Users Under 16: How People Reacted

Australia’s announcement, first revealed in November 2024, has motivated countries from Malaysia to Denmark to consider similar legislation. But not everyone is convinced this is the right way forward.

Supporters Applaud “A Chance at a Real Childhood”

Pediatric occupational therapist Cris Rowan, who has spent 22 years working with children, celebrated the move: “This may be the first time children have the opportunity to experience a real summer,” she said.“Canada should follow Australia’s bold initiative. Parents and teachers can start their own movement by banning social media from homes and schools.” Parents’ groups have also welcomed the decision, seeing it as a necessary intervention in a world where screens dominate childhood.

Others Say the Ban Is Imperfect, but Necessary

Australian author Geoff Hutchison puts it bluntly: “We shouldn’t look for absolutes. It will be far from perfect. But we can learn what works… We cannot expect the repugnant tech bros to care.” His view reflects a broader belief that tech companies have too much power, and too little accountability.

Experts Warn Against False Security 

However, some experts caution that the Australia ban on social media may create the illusion of safety while failing to address deeper issues. Professor Tama Leaver, Internet Studies expert at Curtin University, told The Cyber Express that while the ban on social media addresses some risks, such as algorithmic amplification of inappropriate content and endless scrolling, many online dangers remain.

“The social media ban only really addresses on set of risks for young people, which is algorithmic amplification of inappropriate content and the doomscrolling or infinite scroll. Many risks remain. The ban does nothing to address cyberbullying since messaging platforms are exempt from the ban, so cyberbullying will simply shift from one platform to another.”

Leaver also noted that restricting access to popular platforms will not drive children offline. Due to ban on social media young users will explore whatever digital spaces remain, which could be less regulated and potentially riskier.

“Young people are not leaving the digital world. If we take some apps and platforms away, they will explore and experiment with whatever is left. If those remaining spaces are less known and more risky, then the risks for young people could definitely increase. Ideally the ban will lead to more conversations with parents and others about what young people explore and do online, which could mitigate many of the risks.”

From a broader perspective, Leaver emphasized that the ban on social media will only be fully beneficial if accompanied by significant investment in digital literacy and digital citizenship programs across schools:

“The only way this ban could be fully beneficial is if there is a huge increase in funding and delivery of digital literacy and digital citizenship programs across the whole K-12 educational spectrum. We have to formally teach young people those literacies they might otherwise have learnt socially, otherwise the ban is just a 3 year wait that achieves nothing.”

He added that platforms themselves should take a proactive role in protecting children:

“There is a global appetite for better regulation of platforms, especially regarding children and young people. A digital duty of care which requires platforms to examine and proactively reduce or mitigate risks before they appear on platforms would be ideal, and is something Australia and other countries are exploring. Minimizing risks before they occur would be vastly preferable to the current processes which can only usually address harm once it occurs.”

Looking at the global stage, Leaver sees Australia ban on social media as a potential learning opportunity for other nations:

“There is clearly global appetite for better and more meaningful regulation of digital platforms. For countries considered their own bans, taking the time to really examine the rollout in Australia, to learn from our mistakes as much as our ambitions, would seem the most sensible path forward.”

Other specialists continue to warn that the ban on social media could isolate vulnerable teenagers or push them toward more dangerous, unregulated corners of the internet.

Legal Voices Raise Serious Constitutional Questions

Senior Supreme Court Advocate Dr. K. P. Kylasanatha Pillay offered a thoughtful reflection: “Exposure of children to the vagaries of social media is a global concern… But is a total ban feasible? We must ask whether this is a reasonable restriction or if it crosses the limits of state action. Not all social media content is harmful. The best remedy is to teach children awareness.” His perspective reflects growing debate about rights, safety, and state control.

LinkedIn, Reddit, and the Public Divide

Social media itself has become the battleground for reactions. On Reddit, youngesters were particularly vocal about the ban on social media. One teen wrote: “Good intentions, bad execution. This will make our generation clueless about internet safety… Social media is how teenagers express themselves. This ban silences our voices.” Another pointed out the easy loophole: “Bypassing this ban is as easy as using a free VPN. Governments don’t care about safety — they want control.” But one adult user disagreed: “Everyone against the ban seems to be an actual child. I got my first smartphone at 20. My parents were right — early exposure isn’t always good.” This generational divide is at the heart of the debate.

Brands, Marketers, and Schools Brace for Impact

Bindu Sharma, Founder of World One Consulting, highlighted the global implications: “Ten of the biggest platforms were ordered to block children… The world is watching how this plays out.” If the ban succeeds, brands may rethink how they target younger audiences. If it fails, digital regulation worldwide may need reimagining.

Where Does This Leave the World?

Australia’s decision to ban social media for children under 16 is bold, controversial, and rooted in good intentions. It could reshape how societies view childhood, technology, and digital rights. But as critics note, ban on social media platforms can also create unintended consequences, from delinquency to digital illiteracy. What’s clear is this: Australia has started a global conversation that’s no longer avoidable. As one LinkedIn user concluded: “Safety of the child today is assurance of the safety of society tomorrow.”

Coupang CEO Resigns After Massive Data Breach Exposes Millions of Users

10 December 2025 at 02:42

Coupang CEO Resigns

Coupang CEO Resigns, a headline many in South Korea expected, but still signals a major moment for the country’s tech and e-commerce landscape. Coupang Corp. confirmed on Wednesday that its CEO, Park Dae-jun, has stepped down following a massive Coupang data breach that exposed the personal information of 33.7 million people, almost two-thirds of the country. Park said he was “deeply sorry” for the incident and accepted responsibility both for the breach and for the company’s response. His exit, while formally described as a resignation, is widely seen as a forced departure given the scale of the fallout and growing anger among customers and regulators. To stabilize the company, Coupang’s U.S. parent, Coupang Inc., has appointed Harold Rogers, its chief administrative officer and general counsel, as interim CEO. The parent company said the leadership change aims to strengthen crisis management and ease customer concerns.

What Happened in the Coupang Data Breach

The company clarified that the latest notice relates to the previously disclosed incident on November 29 and that no new leak has occurred. According to Coupang’s ongoing investigation, the leaked information includes:
  • Customer names and email addresses
  • Full shipping address book details, such as names, phone numbers, addresses, and apartment entrance access codes
  • Portions of the order information
Coupang emphasized that payment details, passwords, banking information, and customs clearance codes were not compromised. As soon as it identified the leak, the company blocked abnormal access routes and tightened internal monitoring. It is now working closely with the Ministry of Science and ICT, the National Police Agency, the Personal Information Protection Commission (PIPC), the Korea Internet & Security Agency (KISA), and the Financial Supervisory Service.

Phishing, Smishing, and Impersonation Alerts

Coupang warned customers to be extra cautious as leaked data can fuel impersonation scams. The company reminded users that:
  • Coupang never asks customers to install apps via phone or text.
  • Unknown links in messages should not be opened.
  • Suspicious communications should be reported to 112 or the Financial Supervisory Service.
  • Customers must verify messages using Coupang’s official customer service numbers.
Users who stored apartment entrance codes in their delivery address book were also urged to change them immediately. The company also clarified that delivery drivers rarely call customers unless necessary to access a building or resolve a pickup issue, a small detail meant to help people recognize potential scam attempts.

Coupang CEO Resigns as South Korea Toughens Cyber Rules

The departure of CEO Park comes at a time when South Korea is rethinking how corporations respond to data breaches. The government’s 2025 Comprehensive National Cybersecurity Strategy puts direct responsibility on CEOs for major security incidents. It also expands CISOs' authority, strengthens IT asset management requirements, and gives chief privacy officers greater influence over security budgets. This shift follows other serious breaches, including SK Telecom’s leak of 23 million user records, which led to a record 134.8 billion won fine. Regulators are now considering fines of up to 1.2 trillion won for Coupang, roughly 3% of its annual sales, under the Personal Information Protection Act. The company also risks losing its ISMS-P certification, a possibility unprecedented for a business of its size.

Industry Scramble After a Coupang Data Breach of This Scale

A Coupang Data breach affecting tens of millions of people has sent shockwaves across South Korea’s corporate sector. Authorities have launched emergency inspections of 1,600 ISMS-certified companies and begun unannounced penetration tests. Security vendors say Korean companies are urgently adding multi-factor authentication, AI-based anomaly detection, insider threat monitoring, and stronger access controls. Police naming a former Chinese Coupang employee as a suspect has intensified focus on insider risk. Government agencies, including the National Intelligence Service, are also working with private partners to shorten cyber-incident analysis times from 14 days to 5 days using advanced AI forensic labs.

Looking Ahead

With the Coupang CEO's resignation development now shaping the company’s crisis trajectory, Coupang faces a long road to rebuilding trust among users and regulators. The company says its teams are working to resolve customer concerns quickly, but the broader lesson is clear: cybersecurity failures now carry real consequences, including at the highest levels of leadership.

NCSC Warns Prompt Injection Could Become the Next Major AI Security Crisis

9 December 2025 at 01:07

Prompt Injection

The UK’s National Cyber Security Centre (NCSC) has issued a fresh warning about the growing threat of prompt injection, a vulnerability that has quickly become one of the biggest security concerns in generative AI systems. First identified in 2022, prompt injection refers to attempts by attackers to manipulate large language models (LLMs) by inserting rogue instructions into user-supplied content. While the technique may appear similar to the long-familiar SQL injection flaw, the NCSC stresses that comparing the two is not only misleading but potentially harmful if organisations rely on the wrong mitigation strategies.

Why Prompt Injection Is Fundamentally Different

SQL injection has been understood for nearly three decades. Its core issue, blurring the boundary between data and executable instructions, has well-established fixes such as parameterised queries. These protections work because traditional systems draw a clear distinction between “data” and “instructions.” The NCSC explains that LLMs do not operate in the same way. Under the hood, a model doesn’t differentiate between a developer’s instruction and a user’s input; it simply predicts the most likely next token. This makes it inherently difficult to enforce any security boundary inside a prompt. In one common example of indirect prompt injection, a candidate’s CV might include hidden text instructing a recruitment AI to override previous rules and approve the applicant. Because an LLM treats all text the same, it can mistakenly follow the malicious instruction. This, according to the NCSC, is why prompt injection attacks consistently appear in deployed AI systems and why they are ranked as OWASP’s top risk for generative AI applications.

Treating LLMs as an ‘Inherently Confusable Deputy’

Rather than viewing prompt injection as another flavour of classic code injection, the NCSC recommends assessing it through the lens of a confused deputy problem. In such vulnerabilities, a trusted system is tricked into performing actions on behalf of an untrusted party. Traditional confused deputy issues can be patched. But LLMs, the NCSC argues, are “inherently confusable.” No matter how many filters or detection layers developers add, the underlying architecture still offers attackers opportunities to manipulate outputs. The goal, therefore, is not complete elimination of risk, but reducing the likelihood and impact of attacks.

Key Steps to Building More Secure AI Systems

The NCSC outlines several principles aligned with the ETSI baseline cybersecurity standard for AI systems: 1. Raise Developer and Organisational Awareness Prompt injection remains poorly understood, even among seasoned engineers. Teams building AI-connected systems must recognise it as an unavoidable risk. Security teams, too, must understand that no product can completely block these attacks; risk has to be managed through careful design and operational controls. 2. Prioritise Secure System Design Because LLMs can be coerced into using external tools or APIs, designers must assume they are manipulable from the outset. A compromised prompt could lead an AI assistant to trigger high-privilege actions, effectively handing those tools to an attacker. Researchers at Google, ETH Zurich, and independent security experts have proposed architectures that constrain the LLM’s authority. One widely discussed principle: if an LLM processes external content, its privileges should drop to match the privileges of that external party. 3. Make Attacks Harder to Execute Developers can experiment with techniques that separate “data” from expected “instructions”, for example, wrapping external input in XML tags. Microsoft’s early research shows these techniques can raise the barrier for attackers, though none guarantee total protection. The NCSC warns against simple deny-listing phrases such as “ignore previous instructions,” since attackers can easily rephrase commands. 4. Implement Robust Monitoring A well-designed system should log full inputs, outputs, tool integrations, and failed API calls. Because attackers often refine their attempts over time, early anomalies, like repeated failed tool calls, may provide the first signs of an emerging attack.

A Warning for the AI Adoption Wave

The NCSC concludes that relying on SQL-style mitigations would be a serious mistake. SQL injection saw its peak in the early 2010s after widespread adoption of database-driven applications. It wasn’t until years of breaches and data leaks that secure defaults finally became standard. With generative AI rapidly embedding itself into business workflows, the agency warns that a similar wave of exploitation could occur, unless organisations design systems with prompt injection risks front and center.

Please send help. I can’t stop playing these roguelikes.

8 December 2025 at 07:00

It’s time to admit, before God and the good readers of Ars Technica, that I have a problem. I love roguelikes. Reader, I can’t get enough of them. If there’s even a whisper of a hot new roguelike on Steam, I’m there. You may call them arcane, repetitive, or maddeningly difficult; I call them heaven.

The second best part of video games is taking a puny little character and, over 100 hours, transforming that adventurer into a god of destruction. The best thing about video games is doing the same thing in under an hour. Beat a combat encounter, get an upgrade. Enter a new area, choose a new item. Put together a build and watch it sing.

If you die—immediately ending your ascent and returning you to the beginning of the game—you’ll often make a pit stop at a home base to unlock new goodies to help you on your next run. (Some people distiguish between roguelikes and “roguelites,” with the latter including permanent, between-run upgrades. For simplicity’s sake, I’ll use “roguelike” as an umbrella term).

Read full article

Comments

© Supergiant Games

A massive, Chinese-backed port could push the Amazon Rainforest over the edge

6 December 2025 at 07:30

CHANCAY, Peru—The elevator doors leading to the fifth-floor control center open like stage curtains onto a theater-sized screen.

This “Operations Productivity Dashboard” instantaneously displays a battery of data: vehicle locations, shipping times, entry times, loading data, unloading data, efficiency statistics.

Most striking, though, are the bold lines arcing over the dashboard’s deep-blue Pacific—digital streaks illustrating the routes that lead thousands of miles across the ocean, from this unassuming city, to Asia’s biggest ports.

Read full article

Comments

© Hidalgo Calatayud Espinoza/picture alliance via Getty Images

The NPU in your phone keeps improving—why isn’t that making AI better?

4 December 2025 at 07:00

Almost every technological innovation of the past several years has been laser-focused on one thing: generative AI. Many of these supposedly revolutionary systems run on big, expensive servers in a data center somewhere, but at the same time, chipmakers are crowing about the power of the neural processing units (NPU) they have brought to consumer devices. Every few months, it’s the same thing: This new NPU is 30 or 40 percent faster than the last one. That’s supposed to let you do something important, but no one really gets around to explaining what that is.

Experts envision a future of secure, personal AI tools with on-device intelligence, but does that match the reality of the AI boom? AI on the “edge” sounds great, but almost every AI tool of consequence is running in the cloud. So what’s that chip in your phone even doing?

What is an NPU?

Companies launching a new product often get bogged down in superlatives and vague marketing speak, so they do a poor job of explaining technical details. It’s not clear to most people buying a phone why they need the hardware to run AI workloads, and the supposed benefits are largely theoretical.

Read full article

Comments

© Aurich Lawson | Getty Images

“Players are selfish”: Fallout 2’s Chris Avellone describes his game design philosophy

2 December 2025 at 10:04

Chris Avellone wants you to have a good time.

People often ask creatives—especially those in careers some dream of entering—”how did you get started?” Video game designers are no exception, and Avellone says that one of the most important keys to his success was one he learned early in his origin story.

“Players are selfish,” Avellone said, reflecting on his time designing the seminal computer roleplaying game Planescape: Torment. “The more you can make the experience all about them, the better. So Torment became that. Almost every single thing in the game is about you, the player.”

Read full article

Comments

© Chris Avellone

After 40 years of adventure games, Ron Gilbert pivots to outrunning Death

1 December 2025 at 07:00

If you know the name Ron Gilbert, it’s probably for his decades of work on classic point-and-click adventure games like Maniac Mansion, Indiana Jones and the Last Crusade, the Monkey Island series, and Thimbleweed Park. Given that pedigree, October’s release of the Gilbert-designed Death by Scrolling—a rogue-lite action-survival pseudo-shoot-em-up—might have come as a bit of a surprise.

In an interview from his New Zealand home, though, Gilbert noted that his catalog also includes some reflex-based games—Humungous Entertainment’s Backyard Sports titles and 2010’s Deathspank, for instance. And Gilbert said his return to action-oriented game design today stemmed from his love for modern classics like Binding of Isaac, Nuclear Throne, and Dead Cells.

“I mean, I’m certainly mostly known for adventure games, and I have done other stuff, [but] it probably is a little bit of a departure for me,” he told Ars. “While I do enjoy playing narrative games as well, it’s not the only thing I enjoy, and just the idea of making one of these kind of started out as a whim.”

Read full article

Comments

© Getty Images

We put the new pocket-size vinyl format to the test—with mixed results

28 November 2025 at 07:00

We recently looked at Tiny vinyl, a new miniature vinyl single format developed through a collaboration between a toy industry veteran and the world’s largest vinyl record manufacturer. The 4-inch singles are pressed in a process nearly identical to standard 12-inch LPs or 7-inch singles, except everything is smaller. They have a standard-size spindle hole and play at 33⅓ RPM, and they hold up to four minutes of music per side.

Several smaller bands, like The Band Loula and Rainbow Kitten Surprise, and some industry veterans like Blake Shelton and Melissa Etheridge, have already experimented with the format. But Tiny Vinyl partnered with US retail giant Target for its big coming-out party this fall, with 44 exclusive titles launching throughout the end of this year.

Tiny Vinyl supplied a few promotional copies of releases from former America’s Got Talent finalist Grace VanderWaal, The Band Loula, country pop stars Florida Georgia Line, and jazz legends the Vince Guaraldi Trio so I could get a first-hand look at how the records actually play. I tested these titles as well as several others I picked up at retail, playing them on an Audio Technica LP-120 direct drive manual turntable connected to a Yamaha S-301 integrated amplifier and playing through a pair of vintage Klipsch kg4 speakers.

Read full article

Comments

© Chris Foresman

Vision Pro M5 review: It’s time for Apple to make some tough choices

26 November 2025 at 12:00

With the recent releases of visionOS 26 and newly refreshed Vision Pro hardware, it’s an ideal time to check in on Apple’s Vision Pro headset—a device I was simultaneously amazed and disappointed by when it launched in early 2024.

I still like the Vision Pro, but I can tell it’s hanging on by a thread. Content is light, developer support is tepid, and while Apple has taken action to improve both, it’s not enough, and I’m concerned it might be too late.

When I got a Vision Pro, I used it a lot: I watched movies on planes and in hotel rooms, I walked around my house placing application windows and testing out weird new ways of working. I tried all the neat games and educational apps, and I watched all the immersive videos I could get ahold of. I even tried my hand at developing my own applications for it.

Read full article

Comments

© Samuel Axon

“Go generate a bridge and jump off it”: How video pros are navigating AI

24 November 2025 at 07:00

In 2016, the legendary Japanese filmmaker Hayao Miyazaki was shown a bizarre AI-generated video of a misshapen human body crawling across a floor.

Miyazaki declared himself “utterly disgusted” by the technology demo, which he considered an “insult to life itself.”

“If you really want to make creepy stuff, you can go ahead and do it,” Miyazaki said. “I would never wish to incorporate this technology into my work at all.”

Read full article

Comments

© Aurich Lawson | Getty Images

Black Friday Cybersecurity Survival Guide: Protect Yourself from Scams & Attacks

24 November 2025 at 07:38

Black Friday

Black Friday has evolved into one of the most attractive periods of the year, not just for retailers, but for cybercriminals too. As shoppers rush to grab limited-time deals, attackers exploit the surge in online activity through malware campaigns, phishing scams, payment fraud, and impersonation attacks. With threat actors using increasingly advanced methods, understanding the risks is essential for both shoppers and businesses preparing for peak traffic. This cybersecurity survival guide breaks down the most common Black Friday threats and offers practical steps to stay secure in 2025’s high-risk threat landscape.

Why Black Friday Is a Goldmine for Cybercriminals

Black Friday and Cyber Monday trigger massive spikes in online transactions, email promotions, digital ads, and account logins. This high-volume environment creates the perfect disguise for malicious activity. Attackers know users are expecting deal notifications, promo codes, and delivery updates, making them more likely to click without verifying legitimacy. Retailers also face increased pressure to scale infrastructure quickly, often introducing misconfigurations or security gaps that cybercriminals actively look for.

Common Black Friday Cyber Threats

Black Friday Cybersecurity Survival Guide
  1. Phishing & Fake Deal Emails: Cybercriminals frequently impersonate major retailers to push “exclusive” deals or false order alerts. These emails often contain malicious links aimed at stealing login credentials or credit card data.
  1. Malware Hidden in Apps and Ads: Fake shopping apps and malicious ads spread rapidly during Black Friday.
  1. Fake Retail Websites: Dozens of cloned websites appear each year, mimicking popular brands with nearly identical designs. These sites exist solely to steal payment information or personal data.
  1. Payment Card Fraud & Credential Stuffing: With billions of login attempts occurring during Black Friday, attackers exploit weak or reused passwords to take over retail accounts, redeem loyalty points, or make fraudulent purchases.
  1. Marketplace Scams: Fraudulent sellers on marketplaces offer unrealistic discounts, harvest information, and often never deliver the product. Some also use sophisticated social engineering tactics to manipulate buyers.

Cybersecurity Tips for Shoppers

  • Verify Before You Click: Check URLs, sender domains, and website certificates. Avoid clicking on deal links from emails or messages.
  • Enable Multi-Factor Authentication (MFA): MFA prevents unauthorized access even if an attacker steals your password.
  • Avoid Public Wi-Fi: Unsecured networks can expose your transactions. Use mobile data or a VPN.
  • Use Secure Payment Options: Virtual cards and digital wallets limit your exposure during a breach.
  • Download Apps Only from Official Stores: Stay away from third-party downloads or promo apps not approved by Google or Apple.
Best Practices for Retailers
  • Strengthen Threat Detection & Monitoring: Retailers must monitor unusual login behavior, bot traffic, and transaction spikes. Cyble’s Attack Surface and Threat Intelligence solutions help businesses identify fake domains, phishing lures, and malware campaigns targeting their brand.
  • Secure Payment Infrastructure: Ensure payment systems are PCI-compliant, updated, and protected from card-skimming malware.
  • Educate Customers: Proactively notify customers about known scams and impersonation risks, especially during high-traffic sales periods.
With malware, phishing, and fraud attempts rising sharply during the shopping season, awareness and proactive defense are essential. By staying vigilant and leveraging trusted cybersecurity tools, both shoppers and businesses can navigate Black Friday securely. See how Cyble protects retailers during high-risk shopping seasons. Book your free 20-minute demo now.

Stoke Space goes for broke to solve the only launch problem that “moves the needle”

21 November 2025 at 07:00

LAUNCH COMPLEX 14, Cape Canaveral, Fla.—The platform atop the hulking steel tower offered a sweeping view of Florida’s rich, sandy coastline and brilliant blue waves beyond. Yet as captivating as the vista might be for an aspiring rocket magnate like Andy Lapsa, it also had to be a little intimidating.

To his right, at Launch Complex 13 next door, a recently returned Falcon 9 booster stood on a landing pad. SpaceX has landed more than 500 large orbital rockets. And next to SpaceX sprawled the launch site operated by Blue Origin. Its massive New Glenn rocket is also reusable, and founder Jeff Bezos has invested tens of billions of dollars into the venture.

Looking to the left, Lapsa saw a graveyard of sorts for commercial startups. Launch Complex 15 was leased to a promising startup, ABL Space, two years ago. After two failed launches, ABL Space pivoted away from commercial launch. Just beyond lies Launch Complex 16, where Relativity Space aims to launch from. The company has already burned through $1.7 billion in its efforts to reach orbit. Had billionaire Eric Schmidt not stepped in earlier this year, Relativity would have gone bankrupt.

Read full article

Comments

© Stoke Space

GenAI Is Everywhere—Here’s How to Stay Cyber-Ready

21 November 2025 at 02:56

Cyber Resilience

By Kannan Srinivasan, Business Head – Cybersecurity, Happiest Minds Technologies Cyber resilience means being prepared for anything that might disrupt your systems. It’s about knowing how to get ready, prevent problems, recover quickly, and adapt when a cyber incident occurs. Generative AI, or GenAI, has become a big part of how many organizations work today. About 70% of industries are already using it, and over 95% of US companies have adopted it in some form. GenAI is now supporting nearly every area, including IT, finance, legal, and marketing. It even helps doctors make faster decisions, students learn more effectively, and shoppers find better deals. But what happens if GenAI breaks, gets messed up, or stops working? Once AI is part of your business, you need a stronger plan to stay safe and steady. Here are some simple ways organizations can build their cyber resilience in this AI-driven world.

A Practical Guide to Cyber Resilience in the GenAI Era

  1. Get Leadership and the Board on Board

Leading the way in cyber resilience starts with your leaders. Keep your board and senior managers in the loop about the risks that come with GenAI. Get their support, make sure it lines up with your business goals, and secure enough budget for safety measures and training. Make talking about cyber safety a regular part of your meetings.
  1. Know Where GenAI Is Being Used

Make a list of all departments and processes using GenAI. Note which models you're using, who manages them, and what they’re used for. Then, do a quick risk check—what could happen if a system goes down? This helps you understand the risks and prepare better backup plans.
  1. Check for Weak Spots Regularly

Follow trusted guidelines like OWASP for testing your GenAI systems. Regular checks can spot issues like data leaks or misuse early. Fix problems quickly to stay ahead of potential risks.
  1. Improve Threat Detection and Response

Use security tools that keep an eye on your GenAI systems all the time. These tools should spot unusual activity, prevent data loss, and help investigate when something goes wrong. Make sure your cybersecurity team is trained and ready to act fast.
  1. Use More Than One AI Model

Don’t rely on just one AI tool. Having multiple models from different providers helps keep things running smoothly if one faces problems. For example, if you’re using OpenAI, consider adding options like Anthropic Claude or Google Gemini as backups. Decide which one is your main and which ones are backups.
  1. Update Your Incident Plans

Review and update your plans for dealing with incidents to include GenAI, making sure they meet new rules like the EU AI Act. Once done, test them with drills so everyone knows what to do in a real emergency.

Conclusion

Cyber resilience in the GenAI era is a continuous process. As AI grows, the need for stronger governance, smarter controls, and proactive planning grows with it. Organizations that stay aware, adaptable, and consistent in their approach will continue to build trust and reliability. GenAI opens doors to efficiency and creativity, and resilience ensures that progress stays uninterrupted. The future belongs to those who stay ready, informed, and confident in how they manage technology.

[Correction] Gmail can read your emails and attachments to power “smart features”

20 November 2025 at 08:48

Update November 22. We’ve updated this article after realising we contributed to a perfect storm of misunderstanding around a recent change in the wording and placement of Gmail’s smart features. The settings themselves aren’t new, but the way Google recently rewrote and surfaced them led a lot of people (including us) to believe Gmail content might be used to train Google’s AI models, and that users were being opted in automatically. After taking a closer look at Google’s documentation and reviewing other reporting, that doesn’t appear to be the case.

Gmail does scan email content to power its own “smart features,” such as spam filtering, categorisation, and writing suggestions. But this is part of how Gmail normally works and isn’t the same as training Google’s generative AI models. Google also maintains that these feature settings are opt-in rather than opt-out, although users’ experiences seem to vary depending on when and how the new wording appeared.

It’s easy to see where the confusion came from. Google’s updated language around “smart features” is vague, and the term “smart” often implies AI—especially at a time when Gemini is being integrated into other parts of Google’s products. When the new wording started appearing for some users without much explanation, many assumed it signalled a broader shift. It’s also come around the same time as a proposed class-action lawsuit in the state of California, which, according to Bloomberg, alleges that Google gave Gemini AI access to Gmail, Chat, and Meet without proper user consent.

We’ve revised this article to reflect what we can confirm from Google’s documentation, as it’s always been our aim to give readers accurate, helpful guidance.


Google has updated some Gmail settings around how its “smart features” work, which control how Gmail analyses your messages to power built-in functions.

According to reports we’ve seen, Google has started automatically opting users in to allow Gmail to access all private messages and attachments for its smart features. This means your emails are analyzed to improve your experience with Chat, Meet, Drive, Email and Calendar products. However, some users are now reporting that these settings are switched on by default instead of asking for explicit opt-in—although Google’s help page states that users are opted-out for default.

How to check your settings

Opting in or out requires you to change settings in two places, so I’ve tried to make it as easy to follow as possible. Feel free to let me know in the comments if I missed anything.

To fully opt out, you must turn off Gmail’s smart features in two separate locations in your settings. Don’t miss one, or AI training may continue.

Step 1: Turn off Smart features in Gmail, Chat, and Meet settings

  • Open Gmail on your desktop or mobile app.
  • Click the gear icon → See all settings (desktop) or Menu → Settings (mobile).
  • Find the section called smart features in Gmail, Chat, and Meet. You’ll need to scroll down quite a bit.
Smart features settings
  • Uncheck this option.
  • Scroll down and hit Save changes if on desktop.

Step 2: Turn off Google Workspace smart features

  • Still in Settings, locate Google Workspace smart features.
  • Click on Manage Workspace smart feature settings.
  • You’ll see two options: Smart features in Google Workspace and Smart features in other Google products.
Smart feature settings

  • Toggle both off.
  • Save again in this screen.

Step 3: Verify if both are off

  • Make sure both toggles remain off.
  • Refresh your Gmail app or sign out and back in to confirm changes.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Fake Deals, Fake Stores, Real Losses: Black Friday Scams Hit Record High

20 November 2025 at 02:14

Black Friday sale scams

As Black Friday sale scams continue to rise, shoppers across Europe and the US are being urged to stay vigilant this festive season. With promotions kicking off earlier than ever, some starting as early as October 30 in Romania, cybercriminals have had an extended window to target bargain hunters, exploiting their search for deals with fraudulent schemes. Black Friday 2025, this year, scammers have been impersonating top brands such as Amazon, MediaMarkt, TEMU, IKEA, Kaufland, Grohe, Oral-B, Binance, Louis Vuitton, Jack Daniel’s, Reese’s, and United Healthcare. Among them, Amazon remains the most frequently abused brand, appearing in phishing messages, fake coupon offers, and mobile scams promising massive discounts.

Amid these ongoing threats, many shoppers are also expressing frustration with deceptive pricing tactics seen during the Black Friday period. One Reddit user described the experience as increasingly misleading:

“I'm officially over the Black Friday hype. It used to feel like a sale, now it feels like a prank.

I was tracking a coffee machine at $129. When the ‘Black Friday early deal’ showed up, it became ‘$159 now $139 LIMITED TIME.’ I saw $129 two weeks ago. The kids’ tablet went from $79 to $89 with a Holiday Deal tag — paying extra for a yellow label.

I've been doing Black Friday hunting for 10+ years and it's only gotten worse. Fake doorbusters, fake urgency, fake ‘original’ prices. Feels like they're A/B testing how cooked our brains are as long as the button screams ‘53% OFF.’

Now I only buy when needed and let a Chrome extension track my Amazon orders. It clawed back $72 last month from so-called ‘preview pricing’ after prices dropped again.”

This sentiment reflects a growing concern: while scam campaigns imitate trusted brands, the pressure-driven marketing tactics surrounding Black Friday can also make consumers more vulnerable to fraud.

Black Friday sale scams

Moreover, a recent campaign even spoofed United Healthcare, offering a fake “Black Friday Smile Upgrade” with Oral-B dental kits, aiming to collect sensitive personal data. According to data from the City of London Police, shoppers lost around £11.8 million to online shopping fraud during last year’s festive season, from 1 November 2024 to 31 January 2025. Fraudsters often pressure victims with claims that deals are limited or products are scarce, forcing hurried decisions that can result in stolen funds or sensitive information.

A Month-Long Shopping Season Means More Risk

With strong discounts across electronics, toys, apparel, and home goods, consumers are drawn to higher-ticket items. This year, electronics saw discounts up to 30.1%, toys 28%, apparel 23.2%, and furniture 19%, while televisions, appliances, and sporting goods hit record lows in price, prompting significant e-commerce growth. Adobe reported that for every 1% decrease in price, demand increased by 1.029% compared to the previous year, driving an additional $2.25 billion in online spending, a part of the overall $241.4 billion spent online. The combination of high consumer demand and deep discounts makes the Black Friday shopping period especially attractive to cybercriminals, as the increased volume of online transactions offers more opportunities for scams.

How to Protect Yourself from Black Friday Sale Scams

Ahead of Black Friday on November 28, shoppers are being encouraged to follow advice from the Stop! Think Fraud campaign, run by the Home Office and the National Cyber Security Centre (NCSC). Key precautions include:
  • Check the shop is legitimate: Always verify reviews on trusted websites before making a purchase.
  • Secure your accounts: Enable two-step verification (2SV) for important accounts to add an extra layer of security.
  • Pay securely: Use credit cards or verified payment services like PayPal, Apple Pay, or Google Pay. Avoid storing card details on websites and never pay by direct bank transfer.
  • Beware of delivery scams: Avoid clicking links in unexpected messages or calls and confirm any delivery claims with the organization directly.
Individuals are also urged to report suspicious emails, texts, or fake websites to the NCSC, which collaborates with partners to investigate and remove malicious content. For businesses and security-conscious shoppers, leveraging tools like Cyble’s Cyber Threat Intelligence Platform can help monitor brand impersonation, detect scams, and protect sensitive data in real-time during Black Friday sale scams. With the rise of cyber threats during high-demand shopping periods, proactive intelligence is key to staying safe. Stay alert this Black Friday, your bargains are only valuable if your personal data stays safe. Learn more about how Cyble can protect you and your business here.

The Hidden Cost of Vulnerability Backlogs—And How to Eliminate Them

19 November 2025 at 00:26

Vulnerability Backlogs

Striving for digital transformation, organizations are innovating at an incredibly fast pace. They deploy new applications, services, and platforms daily, creating great opportunities for growth and efficiency. However, this speedy transformation comes with a significant, often overlooked, consequence: an accumulated massive vulnerability backlog. This ever-expanding list of unpatched software flaws, system misconfigurations, and coding errors is a silent drain on an organization's most valuable resources.  For many IT and security teams, the vulnerability backlog is a source of constant pressure and a seemingly unwinnable battle. As soon as they deploy one batch of patches, a new wave of critical vulnerabilities is disclosed.   This reactive cybersecurity approach is both unsustainable and incredibly costly. The true price of a vulnerability backlog extends far beyond the person-hours spent on patching. It manifests as operational friction, stifled innovation, employee burnout, and a persistent, elevated risk of a catastrophic cyberattack  To truly secure the modern enterprise, leaders must look beyond traditional scanning and patching cycles and embrace a new, proactive paradigm for vulnerability management. 

The Anatomy of a Swelling Vulnerability Backlog

A vulnerability backlog is the aggregate of all known but unaddressed security weaknesses within an organization’s IT environment. These weaknesses can range from critical flaws in open-source libraries and commercial software to misconfigured cloud services and insecure code pushed during quick development cycles.  There are three principal reasons the backlog grows incessantly: 
  1. The sheer volume of newly discovered vulnerabilities, numbering in the tens of thousands each year
  2. The complexity of modern, hybrid environments, where assets are spread across on-premises data centers and multiple cloud providers
  3. The monumental challenge of tracking and patching every critical vulnerability
The growing mountain of security weaknesses creates a form of vulnerability debt. It accumulates when you defer patching due to operational constraints, resource limitations, or the fear of breaking critical applications.  The longer a vulnerability remains unpatched, the more time attackers have to develop exploits and launch attacks and turn even a low-priority issue into a full-blown crisis. 

The True, Multifaceted Cost of Inaction 

The costs associated with a large vulnerability backlog are both direct and indirect, affecting your organization’s financial health, operational agility, and human capital. 

Financial and Operational Drains 

The most obvious cost is the direct expense of remediation. That includes the salaries of security professionals who spend countless hours identifying, prioritizing, and deploying patches.  However, the indirect costs are often far greater. Developer productivity plummets when teams are constantly pulled away from building new features to address security issues. It affects the time-to-market for new products and services, handing an advantage to more agile competitors.  In case of a breach from an unpatched vulnerability, the financial fallout can be devastating. It can encompass everything from regulatory fines and legal fees to customer compensation and a drop in stock value. 

The Human Toll 

Beyond the financial and operational impact is the human cost. When security teams drown in a sea of alerts, alert fatigue is unavoidable. And with it, missed critical warnings amidst the terrible alert noise, too.  The constant pressure and the feeling of being perpetually behind contribute to high levels of stress and burnout, resulting in the high turnover of skilled security talent. And here is your vicious cycle: experienced professionals leave; the remaining team is stretched even thinner; and the backlog continues to grow.  This state can also strain the relationship between security, development, and operations teams, preventing the collaboration necessary for a healthy DevSecOps culture. 

From a Reactive to a Proactive Protection 

Instead of “How can we patch faster?”, the more effective question is, “How can we neutralize security risk before we patch vulnerabilities?”.  The answer lies in moving from a predominantly reactive posture revolving around patching and response to a proactive one centered around mitigation. A robust patchless mitigation platform can effectively shield your organization’s environment from exploitation, regardless of the length of your patching cycles.  For instance, Virsec provides powerful compensating controls that prevent malicious actors from exploiting a vulnerability even if it is there and unpatched.  This approach decouples cybersecurity protection from the act of patching. It gives teams the breathing room to remediate vulnerabilities in a planned, methodical way without leaving critical systems exposed to immediate threats.  Applying these mitigation controls at scale is where the smart application of artificial intelligence becomes essential. AI-driven security tools can automate burdensome tasks in security operations centers (SOCs) and security teams.  As an illustration, Virsec’s OTTOGUARD.AI leverages agentic AI to improve security operations’ efficiency in the following way: 
  1. AI agents autonomously deploy and configure security probes to determine which code and software to trust.
  2. They integrate with your existing cybersecurity tool stack to analyze telemetry, assess your risk environment, and identify assets that can be protected immediately (without patching).
  3. They then interface with IT service management platforms, such as ServiceNow, presenting human experts with validated remediation and patching solutions for the remaining issues. Human experts have the final word, reviewing the suggested solutions and deciding whether to act on them.

Foster a Culture of Shared Responsibility 

Technology alone is not a panacea. The most effective vulnerability management programs stand on a strong security culture that breaks down silos between development, security, and operations.  Hence, before anything else, strive to build this culture of collaboration and unified goals. It will inevitably instill a sense of shared responsibility for your organization’s security posture and motivate every individual to be a proactive guardian against threats. 

Final Thoughts 

By combining proactive protection with AI-driven automation and a culture of shared responsibility, organizations can begin to tame their vulnerability backlogs.  This multi-layered approach helps you reduce the risk of a breach, frees up valuable resources, accelerates innovation, and builds a more resilient and future-proof enterprise.  Its goal is to transform security from a cost center and a source of friction into a true business enabler. Because that's what cybersecurity really is: an essential business enabler that makes it possible for organizations to innovate with confidence in an increasingly complex digital world. 

5 Things CISOs, CTOs & CFOs Must Learn From Anthropic’s Autonomous AI Cyberattack Findings

18 November 2025 at 02:28

autonomous AI cyberattack

The revelation that a Chinese state-sponsored group (GTG-1002) used Claude Code to execute a large-scale autonomous AI cyberattack marks a turning point for every leadership role tied to security, technology, or business risk. This was not an AI-assisted intrusion; it was a fully operational AI-powered cyber threat where the model carried out reconnaissance, exploitation, credential harvesting, and data exfiltration with minimal human involvement. Anthropic confirmed that attackers launched thousands of requests per second, targeting 30 global organizations at a speed no human operator could match. With humans directing just 10–20% of the campaign, this autonomous AI cyberattack is the strongest evidence yet that the threat landscape has shifted from human-paced attacks to machine-paced operations. For CISOs, CTOs, and even CFOs, this is not just a technical incident — it’s a strategic leadership warning. autonomous AI cyberattack

1. Machine-Speed Attacks Redefine Detection Expectations

The GTG-1002 actors didn’t use AI as a side tool — they let it run the operation end-to-end. The autonomous AI cyberattack mapped internal services, analyzed authentication paths, tailored exploitation payloads, escalated privileges, and extracted intelligence without stopping to “wait” for a human.
  • CISO takeaway: Detection windows must shrink from hours to minutes.
  • CTO takeaway: Environments must be designed to withstand parallelized, machine-speed probing.
  • CFO takeaway: Investments in real-time detection are no longer “nice to have,” but essential risk mitigation.
Example: Claude autonomously mapped hundreds of internal services across multiple IP ranges and identified high-value databases — work that would take humans days, executed in minutes.

2. Social Engineering Now Targets AI — Not the User

One of the most important elements of this autonomous AI cyberattack is that attackers didn’t technically “hack” Claude. They manipulated it. GTG-1002 socially engineered the model by posing as a cybersecurity firm performing legitimate penetration tests. By breaking tasks into isolated, harmless-looking requests, they bypassed safety guardrails without triggering suspicion.
  • CISO takeaway: AI governance and model-behavior monitoring must become core security functions.
  • CTO takeaway: Treat enterprise AI systems as employees vulnerable to manipulation.
  • CFO takeaway: AI misuse prevention deserves dedicated budget.
Example: Each isolated task Claude executed seemed benign — but together, they formed a full exploitation chain.

3. AI Can Now Run a Multi-Stage Intrusion With Minimal Human Input

This wasn’t a proof-of-concept; it produced real compromises. The GTG-1002 cyberattack involved:
  • autonomous reconnaissance
  • autonomous exploitation
  • autonomous privilege escalation
  • autonomous lateral movement
  • autonomous intelligence extraction
  • autonomous backdoor creation
The entire intrusion lifecycle was carried out by an autonomous threat actor, with humans stepping in only for strategy approvals.
  • CISO takeaway: Assume attackers can automate everything.
  • CTO takeaway: Zero trust and continuous authentication must be strengthened.
  • CFO takeaway: Business continuity plans must consider rapid compromise — not week-long dwell times.
Example: In one case, Claude spent 2–6 hours mapping a database environment, extracting sensitive data, and summarizing findings for human approval — all without manual analysis.

4. AI Hallucinations Are a Defensive Advantage

Anthropic’s investigation uncovered a critical flaw: Claude frequently hallucinated during the autonomous AI cyberattack, misidentifying credentials, fabricating discoveries, or mistaking public information for sensitive intelligence. For attackers, this is a reliability gap. For defenders, it’s an opportunity.
  • CISO takeaway: Honeytokens, fake credentials, and decoy environments can confuse AI-driven intrusions.
  • CTO takeaway: Build detection rules for high-speed but inconsistent behavior — a hallmark of hallucinating AI.
  • CFO takeaway: Deception tech becomes a high-ROI strategy in an AI-augmented threat landscape.
Example: Some of Claude’s “critical intelligence findings” were completely fabricated — decoys could amplify this confusion.

5. AI for Defense Is Now a Necessity, Not a Strategy Discussion

Anthropic’s response made something very clear: defenders must adopt AI at the same speed attackers are. During the Anthropic AI investigation, their threat intelligence team deployed Claude to analyze large volumes of telemetry, correlate distributed attack patterns, and validate activity. This marks the era where defensive AI systems become operational requirements.
  • CISO takeaway: Begin integrating AI into SOC workflows now.
  • CTO takeaway: Implement AI-driven alert correlation and proactive threat detection.
  • CFO takeaway: AI reduces operational load while expanding detection scope, a strategic investment.

Leadership Must Evolve Before the Next Wave Arrives

This incident represents the beginning of AI-powered cyber threats, not the peak. Executives must collaborate to:
  • adopt AI for defense
  • redesign detection for machine-speed adversaries
  • secure internal AI platforms
  • prepare for attacks requiring almost no human attacker involvement
As attackers automate reconnaissance, exploitation, lateral movement, and exfiltration, defenders must automate detection, response, and containment. The autonomous AI cyberattack era has begun. Leaders who adapt now will weather the next wave, leaders who don’t will be overwhelmed by it.
❌