Trends to Watch in the California Legislature

If you’re a Californian, there are a few new state laws you should know about that will go into effect in the new year. EFF has worked hard in Sacramento this session to advance bills that protect privacy, fight surveillance, and promote transparency.

California’s legislature runs in a two-year cycle, meaning that it’s currently halftime for legislators. As we prepare for the next year of the California legislative session in January, it’s a good time to showcase what’s happened so far—and what’s left to do.

Wins Worth Celebrating

In a win for every Californian’s privacy rights, we were happy to support A.B. 566 (Assemblymember Josh Lowenthal). This is a common-sense law that makes California’s main consumer data privacy law, the California Consumer Privacy Act, more user-friendly. It requires that browsers support people’s rights to send opt-out signals, such as the global opt-out in Privacy Badger, to businesses. Managing your privacy as an individual can be a hard job, and EFF wants stronger laws that make it easier for you to do so.
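Under the hood, the opt-out signal at issue, Global Privacy Control, is just an HTTP request header (`Sec-GPC: 1`). As a rough illustration only, the helper below is our own sketch and not language from the bill, a site backend could honor the signal like this:

```python
def honors_gpc_opt_out(headers: dict[str, str]) -> bool:
    """Return True if the request carries a Global Privacy Control opt-out.

    Per the GPC proposal, a value of "1" in the Sec-GPC request header
    signals that the user opts out of the sale/sharing of their data.
    Header names are matched case-insensitively, as HTTP requires.
    """
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"


if __name__ == "__main__":
    print(honors_gpc_opt_out({"Sec-GPC": "1"}))        # opt-out present
    print(honors_gpc_opt_out({"User-Agent": "test"}))  # no signal
```

A browser (or an extension like Privacy Badger) attaches the header to every request, so the check runs once per request rather than per page.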

Additionally, we were proud to advance government transparency by supporting A.B. 1524 (Judiciary Committee), which allows members of the public to make copies of public court records using their own devices, such as cell-phone cameras and overhead document scanners, without paying fees.

We also supported two bills that will improve law enforcement accountability at a time when we desperately need it. S.B. 627 (Senator Scott Wiener) prohibits law enforcement officers from wearing masks to avoid accountability. (The Trump administration has sued California over this law.) We also supported S.B. 524 (Sen. Jesse Arreguín), which requires law enforcement to disclose when a police report was written using artificial intelligence.

On the To-Do List for Next Year

On the flip side, we also stopped some problematic bills from becoming law. This includes S.B. 690 (Sen. Anna Caballero), which we dubbed the Corporate Coverup Act. This bill would have gutted California’s wiretapping statute by allowing businesses to ignore those privacy rights for “any business purpose.” Working with several coalition partners, we were able to keep that bill from moving forward in 2025. We do expect to see it come back in 2026, and are ready to fight back against those corporate business interests.

And, of course, not every fight ended in victory. There are still many areas where we have work left to do. California Governor Gavin Newsom vetoed a bill we supported, S.B. 7, which was sponsored by the California Federation of Labor Unions and would have given workers in California greater transparency into how their employers use artificial intelligence. S.B. 7 was vetoed in response to concerns from companies including Uber and Lyft, but we expect to continue working with the labor community on the ways AI affects the workplace in 2026.

Trends of Note

California continued a troubling years-long trend of lawmakers pushing problematic proposals that would require every internet user to verify their age to access information—often by relying on privacy-invasive methods to do so. Earlier this year EFF sent a letter to the California legislature expressing grave concerns with lawmakers’ approach to regulating young people’s ability to speak online. We continue to raise these concerns, and would welcome working with any lawmaker in California on a better solution.

We also continue to keep a close eye on government data sharing. On this front, there is some good news. Several of the bills we supported this year sought to place needed safeguards on the ways various government agencies in California share data. These include: A.B. 82 (Asm. Chris Ward) and S.B. 497 (Wiener), which would add privacy protections to data collected by the state about those who may be receiving gender-affirming or reproductive health care; A.B. 1303 (Asm. Avelino Valencia), which prohibits warrantless data sharing from California’s low-income broadband program to immigration and other government officials; and S.B. 635 (Sen. Maria Elena Durazo), which places similar limits on data collected from sidewalk vendors.

We are also heartened to see California correct course on broad government data sharing. Last session, we opposed A.B. 518 (Asm. Buffy Wicks), which let state agencies ignore existing state privacy law to allow broader information sharing about people eligible for CalFresh—the state’s federally funded food assistance program. As we’ve seen, the federal government has since sought data from food assistance programs to use for other purposes. We were happy to have instead supported A.B. 593 this year, also authored by Asm. Wicks—which reversed course on that data sharing.

We hope to see this attention to the harms of careless government data sharing continue. EFF’s sponsored bill this year, A.B. 1337, would update and extend vital privacy safeguards present at the state agency level to counties and cities. These local entities today collect enormous amounts of data and administer programs that weren’t contemplated when the original law was written in 1977. That information should be held to strong privacy standards.

We’ve been fortunate to work with Asm. Chris Ward, who is also the chair of the LGBTQ Caucus in the legislature, on that bill. The bill stalled in the Senate Judiciary Committee during the 2025 legislative session, but we plan to bring it back in the next session with a renewed sense of urgency.

  •  

Google Finds Five China-Nexus Groups Exploiting React2Shell Flaw


Researchers with Google Threat Intelligence Group have detected five China-nexus threat groups exploiting the maximum-severity React2Shell flaw to drop a range of malicious payloads, from backdoors to downloaders to tunnelers.

The post Google Finds Five China-Nexus Groups Exploiting React2Shell Flaw appeared first on Security Boulevard.

  •  

Leading Through Ambiguity: Decision-Making in Cybersecurity Leadership

Ambiguity isn't just a challenge. It's a leadership test - and most fail it.

I want to start with something that feels true but gets ignored way too often.

Most of us in leadership roles have a love-hate relationship with ambiguity. We say we embrace it... until it shows up for real. Then we freeze, hedge our words, or pretend we have a plan. Cybersecurity teams deal with ambiguity all the time. It's in threat intel you can't quite trust, in stakeholder demands that swing faster than markets, in patch rollouts that go sideways. But ambiguity isn't a bug to be fixed. It's a condition to be led through.

[Image: A leader facing a foggy maze of digital paths - ambiguity as environment.]

Let's break this down the way I see it, without jazz hands or buzzwords.

Ambiguity isn't uncertainty. It's broader.  

Uncertainty is when you lack enough data to decide. Ambiguity is when even the terms of the problem are in dispute. It's not just what we don't know. It's what we can't define yet. In leadership terms, that feels like being handed a puzzle where some pieces aren't even shaped yet. This is classic VUCA territory - volatility, uncertainty, complexity, and ambiguity make up the modern landscape leaders sit in every day.

[Image: The dual nature of ambiguity - logic on one side, uncertainty on the other.]

Here is the blunt truth. Great leaders don't eliminate ambiguity. They engage with it. They treat ambiguity like a partner you've gotta dance with, not a foe to crush.

Ambiguity is a leadership signal  

When a situation is ambiguous, it's telling you something. It's saying your models are incomplete, or your language isn't shared, or your team has gaps in context. Stanford researchers and communication experts have been talking about this recently: ambiguity often reflects a gap in the shared mental model across the team. If you're confused, your team probably is too.

A lot of leadership texts treat ambiguity like an enemy of clarity. But that's backward. Ambiguity is the condition that demands sensemaking. Sensemaking is the real work. It's the pattern of dialogue and iteration that leads to shared understanding amid chaos. That means asking the hard questions out loud, not silently wishing for clarity.

If your team seems paralyzed, unclear, or checked out - it might not be them. It might be you.

Leaders model calm confusion  

Think about that phrase. Calm confusion. Leaders rarely say, "I don't know." Instead they hedge, hide, or overcommit. But leaders who effectively navigate ambiguity do speak up about what they don't know. Not to sound vulnerable in a soft way, but to anchor the discussion in reality. Modeling that gives others permission to explore unknowns without fear.

I once watched a director hold a 45-minute meeting to "gain alignment" without once stating the problem. Everyone left more confused than when they walked in. That’s not leadership. That's cover.

There is a delicate balance here. You don't turn every ambiguous situation into a therapy session. Instead, you create boundaries around confusion so the team knows where exploration stops and action begins. Good leaders hold this tension.

Move through ambiguity with frameworks, not polish  

Here is a practical bit. One common way to get stuck is treating decisions as if they're singular. But ambiguous situations usually contain clusters of decisions wrapped together. A good framework is to break the big, foggy problem into smaller, more tractable decisions. Clarify what is known, identify the assumptions you are making, and make provisional calls on the rest. Treat them like hypotheses to test, not laws of motion.

In cybersecurity, this looks like mapping your threat intel to scenarios where you know the facts, then isolating the areas of guesswork where your team can experiment or prepare contingencies. It's not clean. But it beats paralysis.
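To make that decomposition concrete, here is a minimal sketch in Python. The class, field names, and the incident scenario are purely illustrative, not a formal method:

```python
from dataclasses import dataclass, field


@dataclass
class SubDecision:
    """One piece of a foggy problem: what we know, what we assume,
    and a provisional call we treat as a hypothesis, not a verdict."""
    question: str
    known_facts: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)
    provisional_call: str = ""

    def is_testable(self) -> bool:
        # Actionable once it has a provisional call to test,
        # even while some assumptions remain open.
        return bool(self.provisional_call)


incident = [
    SubDecision(
        question="Is the beaconing host compromised?",
        known_facts=["outbound traffic to known C2 range"],
        assumptions=["intel feed attribution is accurate"],
        provisional_call="isolate host, verify with forensics",
    ),
    SubDecision(question="Do we notify customers now?"),  # still pure fog
]

actionable = [d.question for d in incident if d.is_testable()]
print(actionable)  # → ['Is the beaconing host compromised?']
```

The point of the structure is the conversation it forces: every assumption is written down where the team can challenge it, and anything without a provisional call is visibly still in the fog.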

Teams learn differently under ambiguity  

If you have ever noticed that your best team members step up in clear crises but shut down when the goals are vague, you're observing humans responding to ambiguity differently. Some thirst for structure. Others thrive in gray zones. As a leader, you want both. You shape the context so self-starters can self-start, and then you steward alignment so the whole group isn't pulling in four directions.

There's a counterintuitive finding in team research: under certain conditions, ambiguity enables better collaborative decision making because the absence of a single authoritative voice forces people to share and integrate knowledge more deeply. But this only works when there is a shared understanding of the task and a culture of open exchange.

Lead ambiguity, don't manage it  

Managing ambiguity sounds like you're trying to tighten it up, reduce it, or push it into a box. Leading ambiguity is different. It's about moving with the uncertainty. Encouraging experiments. Turning unknowns into learning loops. Recognizing iterative decision processes rather than linear ones.

And yes, that approach feels messy. Good. Leadership is messy. The only thing worse than ambiguity is false certainty. I've been in too many rooms where leaders pretended to know the answer, only to cost time, credibility, or talent. You can be confident without being certain. That's leadership.

But there's a flip side no one talks about.

Sometimes leaders use ambiguity as a shield. They stay vague, push decisions down the org, and let someone else take the hit if it goes sideways. I've seen this pattern more than once. Leaders who pass the fog downstream and call it empowerment. Except it's not. It's evasion. And it sets people up to fail.

Real leaders see ambiguity for what it is: a moment to step up and mentor. To frame the unknowns, offer scaffolding, and help others think through it with some air cover. The fog is a chance to teach — not disappear.

But the hard truth? Some leaders can't handle the ambiguity themselves. So they deflect. They repackage their own discomfort as a test of independence, when really they're just dodging responsibility. And sometimes, yeah, it feels intentional. They act like ambiguity builds character... but only because they're too insecure or inexperienced to lead through it.

The result is the same: good people get whiplash. Goals shift. Ownership blurs. Trust erodes. And the fog thickens.

There's research on this, too. It's called role ambiguity — when you're not clear on what's expected, what your job even is, or how success gets measured. People in those situations don't just get frustrated. They burn out. They overcompensate for silence. They stop trusting. And productivity tanks. It's not about needing a five-year plan. It's about needing a shared frame to work from. Leadership sets that tone.

Leading ambiguity means owning the fog, not outsourcing it.

Ambiguity isn't a one-off problem. It's a perpetual condition, especially in cybersecurity and executive realms where signals are weak and stakes are high. The real skill isn't clarity. It's resilience. The real job isn't prediction. It's navigation.

Lead through ambiguity by embracing the fog, not burying it. And definitely not dumping it on someone else.

When the fog rolls in, what kind of leader are you really?


The post Leading Through Ambiguity: Decision-Making in Cybersecurity Leadership appeared first on Security Boulevard.

  •  

NDSS 2025 – Selective Data Protection against Memory Leakage Attacks for Serverless Platforms

Session 6B: Confidential Computing 1

Authors, Creators & Presenters: Maryam Rostamipoor (Stony Brook University), Seyedhamed Ghavamnia (University of Connecticut), Michalis Polychronakis (Stony Brook University)

PAPER
LeakLess: Selective Data Protection against Memory Leakage Attacks for Serverless Platforms

As the use of language-level sandboxing for running untrusted code grows, the risks associated with memory disclosure vulnerabilities and transient execution attacks become increasingly significant. Besides the execution of untrusted JavaScript or WebAssembly code in web browsers, serverless environments have also started relying on language-level isolation to improve scalability by running multiple functions from different customers within a single process. Web browsers have adopted process-level sandboxing to mitigate memory leakage attacks, but this solution is not applicable in serverless environments, as running each function as a separate process would negate the performance benefits of language-level isolation. In this paper we present LeakLess, a selective data protection approach for serverless computing platforms. LeakLess alleviates the limitations of previous selective data protection techniques by combining in-memory encryption with a separate I/O module to enable the safe transmission of the protected data between serverless functions and external hosts. We implemented LeakLess on top of the Spin serverless platform, and evaluated it with real-world serverless applications. Our results demonstrate that LeakLess offers robust protection while incurring a minor throughput decrease under stress-testing conditions of up to 2.8% when the I/O module runs on a different host than the Spin runtime, and up to 8.5% when it runs on the same host.
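The core idea, keeping sensitive values encrypted while they sit in function memory and decrypting only at an I/O boundary, can be illustrated with a deliberately simplified toy. A one-time-pad XOR stands in for real in-memory encryption here, and nothing below reflects LeakLess's actual design; in the real system, decryption happens in a separate I/O module rather than inside the sandboxed function, so the key material is not leakable alongside the ciphertext as it is in this sketch:

```python
import secrets


class ProtectedValue:
    """Toy selective-protection wrapper: the plaintext is never stored
    in the object, only ciphertext and a per-value random pad. A memory
    disclosure that dumps this object sees random-looking bytes."""

    def __init__(self, plaintext: bytes):
        self._key = secrets.token_bytes(len(plaintext))
        self._ct = bytes(p ^ k for p, k in zip(plaintext, self._key))

    def reveal_for_io(self) -> bytes:
        # In a LeakLess-style design this step would run in a separate
        # I/O module on the way to an external host, never in the
        # sandboxed function's own address space.
        return bytes(c ^ k for c, k in zip(self._ct, self._key))


secret = ProtectedValue(b"api-token-123")
assert secret._ct != b"api-token-123"            # not held in the clear
assert secret.reveal_for_io() == b"api-token-123"
```

The performance numbers in the abstract (2.8% vs. 8.5% throughput loss) reflect exactly this boundary: moving the I/O module to a different host than the runtime trades locality for stronger isolation.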


ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing their creators, authors, and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


The post NDSS 2025 – Selective Data Protection against Memory Leakage Attacks for Serverless Platforms appeared first on Security Boulevard.

  •  

News Alert: Link11’s Top 5 cybersecurity trends set to shape European defense strategies in 2026

Frankfurt, Dec. 16, 2025, CyberNewswire — Link11, a European provider of web infrastructure security solutions, has released new insights outlining five key cybersecurity developments expected to influence how organizations across Europe prepare for and respond to threats in 2026.… (more…)

The post News Alert: Link11’s Top 5 cybersecurity trends set to shape European defense strategies in 2026 first appeared on The Last Watchdog.

The post News Alert: Link11’s Top 5 cybersecurity trends set to shape European defense strategies in 2026 appeared first on Security Boulevard.

  •  

Code Execution in Jupyter Notebook Exports

After our research on Cursor, in the context of developer-ecosystem security, we turn our attention to the Jupyter ecosystem. We expose security risks we identified in the notebook’s export functionality, in the default Windows environment, to help organizations better protect their assets and networks. Executive Summary We identified a new way external Jupyter notebooks could […]

The post Code Execution in Jupyter Notebook Exports appeared first on Blog.

The post Code Execution in Jupyter Notebook Exports appeared first on Security Boulevard.

  •  

Veza Extends Reach to Secure and Govern AI Agents

Veza has added a platform to its portfolio that is specifically designed to secure and govern artificial intelligence (AI) agents that might soon be strewn across the enterprise. Currently in the process of being acquired by ServiceNow, the platform is based on an Access Graph the company previously developed to provide cybersecurity teams with a..

The post Veza Extends Reach to Secure and Govern AI Agents appeared first on Security Boulevard.

  •  

Real Attacks of the Week: How Spyware Beaconing and Exploit Probing Are Shaping Modern Intrusions

Over the past week, enterprise security teams observed a combination of covert malware communication attempts and aggressive probing of publicly exposed infrastructure. These incidents, detected across firewall and endpoint security layers, demonstrate how modern cyber attackers operate simultaneously. While quietly activating compromised internal systems, they also relentlessly scan external services for exploitable weaknesses. Although the

The post Real Attacks of the Week: How Spyware Beaconing and Exploit Probing Are Shaping Modern Intrusions appeared first on Seceon Inc.

The post Real Attacks of the Week: How Spyware Beaconing and Exploit Probing Are Shaping Modern Intrusions appeared first on Security Boulevard.

  •  

Seceon Announces Strategic Partnership with InterSources Inc. to Expand Delivery of AI-Driven Cybersecurity Across Regulated Industries

As cyber threats against regulated industries continue to escalate in scale, sophistication, and financial impact, organizations are under immense pressure to modernize security operations while meeting strict compliance requirements. Addressing this urgent need, Seceon has announced a strategic partnership with InterSources Inc., expanding the delivery of AI-driven cybersecurity solutions across some of the world’s most

The post Seceon Announces Strategic Partnership with InterSources Inc. to Expand Delivery of AI-Driven Cybersecurity Across Regulated Industries appeared first on Seceon Inc.

The post Seceon Announces Strategic Partnership with InterSources Inc. to Expand Delivery of AI-Driven Cybersecurity Across Regulated Industries appeared first on Security Boulevard.

  •  

🪪 Age Verification Is Coming for the Internet | EFFector 37.18

The final EFFector of 2025 is here! Just in time to keep you up-to-date on the latest happenings in the fight for privacy and free speech online.

In this latest issue, we're sharing how to spot sneaky ALPR cameras at the U.S. border, covering a host of new resources on age verification laws, and explaining why AI companies need to protect chatbot logs from bulk surveillance.

Prefer to listen in? Check out our audio companion, where EFF Activist Molly Buckley walks through our new resource on age verification laws and how you can fight back. Catch the conversation on YouTube or the Internet Archive.

LISTEN TO EFFECTOR

EFFECTOR 37.18 - 🪪 AGE VERIFICATION IS COMING FOR THE INTERNET

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

  •  

Compromised IAM Credentials Power a Large AWS Crypto Mining Campaign

An ongoing campaign has been observed targeting Amazon Web Services (AWS) customers using compromised Identity and Access Management (IAM) credentials to enable cryptocurrency mining. The activity, first detected by Amazon's GuardDuty managed threat detection service and its automated security monitoring systems on November 2, 2025, employs never-before-seen persistence techniques to hamper

  •  

Rogue NuGet Package Poses as Tracer.Fody, Steals Cryptocurrency Wallet Data

Cybersecurity researchers have discovered a new malicious NuGet package that typosquats and impersonates the popular .NET tracing library and its author to sneak in a cryptocurrency wallet stealer. The malicious package, named "Tracer.Fody.NLog," remained on the repository for nearly six years. It was published by a user named "csnemess" on February 26, 2020. It masquerades as "Tracer.Fody,"
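A crude heuristic for this suffix-style impersonation, flagging packages whose names extend a well-known package with an extra dotted segment, might look like the sketch below. The popular-package list is illustrative, and legitimate sub-packages will also match, so any hit needs a manual author/ownership check before a verdict:

```python
# Illustrative watchlist; a real tool would pull download-ranked names
# from the registry itself.
POPULAR_PACKAGES = {"Tracer.Fody", "Newtonsoft.Json", "NLog"}


def impersonation_suspects(candidates: list[str]) -> list[str]:
    """Flag names that extend a popular package with a dotted suffix,
    e.g. "Tracer.Fody.NLog" riding on "Tracer.Fody"'s reputation.
    Heuristic only: ownership must be verified before acting."""
    suspects = []
    for name in candidates:
        for popular in POPULAR_PACKAGES:
            if name != popular and name.startswith(popular + "."):
                suspects.append(name)
                break
    return suspects


print(impersonation_suspects(["Tracer.Fody.NLog", "Tracer.Fody", "SomeLib"]))
# → ['Tracer.Fody.NLog']
```

Checking whether the suffixed package shares a publisher with the base package is the cheap follow-up that separates impersonation from a genuine plugin.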

  •  

Can a Transparent Piece of Plastic Win the Invisible War on Your Identity?

Identity systems hold modern life together, yet we barely notice them until they fail. Every time someone starts a new job, crosses a border, or walks into a secure building, an official must answer one deceptively simple question: Is this person really who they claim to be? That single moment—matching a living, breathing human to..

The post Can a Transparent Piece of Plastic Win the Invisible War on Your Identity? appeared first on Security Boulevard.


  •  

Securing the AI Frontier: How API Posture Governance Enables NIST AI RMF Compliance

As organizations accelerate the adoption of Artificial Intelligence, from deploying Large Language Models (LLMs) to integrating autonomous agents and Model Context Protocol (MCP) servers, risk management has transitioned from a theoretical exercise to a critical business imperative. The NIST AI Risk Management Framework (AI RMF 1.0) has emerged as the standard for managing these risks, offering a structured approach to designing, developing, and deploying trustworthy AI systems.

However, AI systems do not operate in isolation. They rely heavily on Application Programming Interfaces (APIs) to ingest training data, serve model inferences, and facilitate communication between agents and servers. Consequently, the API attack surface effectively becomes the AI attack surface. Securing these API pathways is fundamental to achieving the "Secure and Resilient" and "Privacy-Enhanced" characteristics mandated by the framework.

Understanding the NIST AI RMF Core

The NIST AI RMF is organized around four core functions that provide a structure for managing risk throughout the AI lifecycle:

  • GOVERN: Cultivates a culture of risk management and outlines processes, documents, and organizational schemes.
  • MAP: Establishes context to frame risks, identifying interdependencies and visibility gaps.
  • MEASURE: Employs tools and methodologies to analyze, assess, and monitor AI risk and related impacts.
  • MANAGE: Prioritizes and acts upon risks, allocating resources to respond to and recover from incidents.

The Critical Role of API Posture Governance

While the "GOVERN" function in the NIST framework focuses on organizational culture and policies, API Posture Governance serves as the technical enforcement mechanism for these policies in operational environments.

Without robust API posture governance, organizations struggle to effectively Manage or Govern their AI risks. Unvetted AI models may be deployed via shadow APIs, and sensitive training data can be exposed through misconfigurations. Automating posture governance ensures that every API connected to an AI system adheres to security standards, preventing the deployment of insecure models and ensuring your AI infrastructure remains compliant by design.
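At its simplest, the shadow-API gap described above is a set difference between what is observed on the wire and what sits in the governed catalog. A minimal sketch, with hypothetical endpoint names:

```python
def find_shadow_apis(observed: set[str], catalog: set[str]) -> set[str]:
    """Endpoints seen in live traffic but absent from the governed
    inventory are shadow APIs: operational, but outside policy."""
    return observed - catalog


observed_endpoints = {
    "/v1/inference",
    "/v1/train/upload",
    "/internal/model-export",   # never registered with governance
}
governed_catalog = {"/v1/inference", "/v1/train/upload"}

print(find_shadow_apis(observed_endpoints, governed_catalog))
# → {'/internal/model-export'}
```

Real posture-governance tooling layers discovery (traffic analysis, gateway logs) and policy checks on top, but the governing loop still reduces to continuously computing and acting on this difference.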

How Salt Security Safeguards AI Systems

Salt Security provides a tailored solution that aligns directly with the NIST AI RMF. By securing the API layer (Agentic AI Action Layer), Salt Security helps organizations maintain the integrity of their AI systems and safeguard sensitive data. The key features, along with their direct correlations to NIST AI RMF functions, include:

Automated API Discovery:

  • Alignment: Supports the MAP function by establishing context and recognizing risk visibility gaps.
  • Outcome: Guarantees a complete inventory of all APIs, including shadow APIs used for AI training or inference, ensuring no part of the AI ecosystem is unmanaged.

Posture Governance:

  • Alignment: Operationalizes the GOVERN and MANAGE functions by enabling organizational risk culture and prioritizing risk treatment.
  • Outcome: Preserves secure APIs throughout their lifecycle, enforcing policies that prevent the deployment of insecure models and ensuring ongoing compliance with NIST standards.

AI-Driven Threat Detection:

  • Alignment: Meets the Secure & Resilient trustworthiness characteristic by defending against adversarial misuse and exfiltration attacks.
  • Outcome: Actively identifies and blocks sophisticated threats like model extraction, data poisoning, and prompt injection attacks in real-time.

Sensitive Data Visibility:

  • Alignment: Supports the Privacy-Enhanced characteristic by safeguarding data confidentiality and limiting observation.
  • Outcome: Oversees data flow through APIs to protect PII and sensitive training data, ensuring data minimization and privacy compliance.

Vulnerability Assessment:

  • Alignment: Assists in the MEASURE function by assessing system trustworthiness and testing for failure modes.
  • Outcome: Identifies logic flaws and misconfigurations in AI-connected APIs before they can be exploited by adversaries.

Conclusion

Trustworthy AI requires secure APIs. By implementing API Posture Governance and comprehensive security controls, organizations can confidently adopt the NIST AI RMF and innovate safely. Salt Security provides the visibility and protection needed to secure the critical infrastructure powering your AI. For a more in-depth understanding of API security compliance across multiple regulations, please refer to our comprehensive API Compliance Whitepaper.

If you want to learn more about Salt and how we can help you, please contact us, schedule a demo, or visit our website. You can also get a free API Attack Surface Assessment from Salt Security's research team and learn what attackers already know.

The post Securing the AI Frontier: How API Posture Governance Enables NIST AI RMF Compliance appeared first on Security Boulevard.

  •  

Unified Security for On-Prem, Cloud, and Hybrid Infrastructure: The Seceon Advantage

Breaking Free from Security Silos in the Modern Enterprise Today’s organizations face an unprecedented challenge: securing increasingly complex IT environments that span on-premises data centers, multiple cloud platforms, and hybrid architectures. Traditional security approaches that rely on disparate point solutions are failing to keep pace with sophisticated threats, leaving critical gaps in visibility and response

The post Unified Security for On-Prem, Cloud, and Hybrid Infrastructure: The Seceon Advantage appeared first on Seceon Inc.

The post Unified Security for On-Prem, Cloud, and Hybrid Infrastructure: The Seceon Advantage appeared first on Security Boulevard.

  •  

SoundCloud Confirms Security Incident

SoundCloud confirmed today that it experienced a security incident involving unauthorized access to a supporting internal system, resulting in the exposure of certain user data. The company said the incident affected approximately 20 percent of its users and involved email addresses along with information already visible on public SoundCloud profiles. Passwords and financial information were […]

The post SoundCloud Confirms Security Incident appeared first on Centraleyes.

The post SoundCloud Confirms Security Incident appeared first on Security Boulevard.

  •  

T.H.E. Journal: How Schools Can Reduce Digital Distraction Without Surveillance

This article was originally published in T.H.E. Journal on 12/10/25 by Charlie Sander. Device-based learning is no longer “new,” but many schools still lack a coherent playbook for managing it. Many school districts dashed to adopt 1:1 computing during the pandemic, spending $48 million on new devices to ensure every child had a platform to take classes ...

The post T.H.E. Journal: How Schools Can Reduce Digital Distraction Without Surveillance appeared first on ManagedMethods Cybersecurity, Safety & Compliance for K-12.

The post T.H.E. Journal: How Schools Can Reduce Digital Distraction Without Surveillance appeared first on Security Boulevard.

  •  

SoundCloud, Pornhub, and 700Credit all reported data breaches, but the similarities end there

Comparing data breaches is like comparing apples and oranges. They differ on many levels. To news media, the size of the brand, how many users were impacted, and how it was done often dominate the headlines. For victims, what really matters is the type of information stolen. And for the organizations involved, the focus is on how they will handle the incident. So, let’s have a look at the three that showed up in the news feeds today.

700Credit

700Credit is a US provider of credit reports, preliminary credit checks, identity verification, fraud detection, and compliance tools for automobile, recreational vehicle, powersports, and marine dealerships.

In a notice on its website, 700Credit informed media, partners, and affected individuals that it suffered a third-party supply-chain attack in late October 2025. According to the notice, an attacker gained unauthorized access to personally identifiable information (PII), including names, addresses, dates of birth, and Social Security numbers (SSNs). The breach involves data collected between May and October, impacting roughly 5.6 million people.

The supply-chain attack demonstrates how much partner incident handling matters. Reportedly, 700Credit communicates with more than 200 integration partners through application programming interfaces (APIs). When one of those partners was compromised in July, it failed to notify 700Credit. As a result, unnamed cybercriminals broke into that third party's system and exploited an API used to pull consumer information.

700Credit shut down the exposed third-party API, notified the FBI and FTC, and is mailing letters to victims offering credit monitoring while coordinating with dealers and state regulators.

SoundCloud

SoundCloud is a leading audio streaming platform where users can upload, promote, stream, and share music, podcasts, and other audio content.

SoundCloud posted a notice on its website stating that it recently detected unauthorized activity in an ancillary service dashboard. Ancillary services refer to specialized functions that help maintain stability and reliability. While SoundCloud was containing the attack, it also came under denial-of-service attacks, two of which temporarily knocked the platform’s web availability offline.

An investigation found that no sensitive data such as financial or password data was accessed. The exposed data consisted of email addresses and information already visible on public SoundCloud profiles. The company estimates the incident affected roughly 20% of its user base.

Pornhub

Pornhub is one of the world’s most visited adult video-sharing websites, allowing users to view content anonymously or create accounts to upload and interact with videos.

Reportedly, Pornhub disclosed that on November 8, 2025, a security breach at third-party analytics provider Mixpanel exposed “a limited set of analytics events for certain users.” Pornhub stressed that this was not a breach of Pornhub’s own systems, and said that passwords, payment details, and financial information were not exposed. Mixpanel, however, disputes that the data originated from its November 2025 security incident.

According to reports, the ShinyHunters extortion group claims to have obtained about 94 GB of data containing more than 200 million analytics records tied to Pornhub Premium activity. ShinyHunters shared a data sample with BleepingComputer that included a Pornhub Premium member’s email address, activity type, location, video URL, video name, keywords associated with the video, and the time the event occurred.

ShinyHunters has told BleepingComputer that it sent extortion demands to Pornhub, and the nature of the exposed data creates clear risks for blackmail, outing, and reputational harm—even though no Social Security numbers, government IDs, or payment card details are in the scope of the breach.

Comparing apples and oranges

As you can see, these are three very different data breaches. Not just in how they happened, but in what they mean for the people affected.

While email addresses and knowing that someone uses SoundCloud could be useful for phishers and scammers, it’s a long way from the leverage that comes with detailed records of Pornhub Premium activity. If that doesn’t get you on the list of a “hello pervert” scammer, I don’t know what will.

But undoubtedly the most dangerous one for those affected is the 700Credit breach, which hands an attacker enough information for identity theft. In the other cases an attacker still has to penetrate another defense layer; with a successful identity theft, the attacker has already reached an important goal.

| Aspect | SoundCloud | 700Credit | Pornhub |
| --- | --- | --- | --- |
| People affected | Estimated ~28–36 million users (about 20% of users) | ~5.6 million people | “Select” Premium users; ~201 million activity records (not 201 million people) |
| Leaked data | Email addresses and public profile info | Names, addresses, dates of birth, SSNs | Search, watch, and download activity; attacker-shared samples include email addresses, timestamps, and IP/geo-location data |
| Sensitivity level | Low (mostly already public contact/profile data) | Very high (classic identity-theft PII) | Very high (intimate behavioral and preference data, blackmail/extortion potential) |
| Breach cause | Unauthorized access to an internal service dashboard | Third-party API compromise (supply-chain attack) | Disputed incident involving third-party analytics data (Mixpanel), following a smishing campaign |

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

  •  

Android mobile adware surges in second half of 2025

Android users spent 2025 walking an ever-tightening tightrope, with malware, data-stealing apps, and SMS-borne scams all climbing sharply while attackers refined their business models around mobile data and access.

Looking back, we may view 2025 as the year when one-off scams gave way to coordinated, well-structured attack frameworks.

Comparing two equal six‑month periods—December 2024 through May 2025 versus June through November 2025—our data shows Android adware detections nearly doubled (90% increase), while PUP detections increased by roughly two‑thirds and malware detections by about 20%.

The strong rise in SMS-based attacks we flagged in June indicates that 2025 is the payoff year. The capabilities to steal one‑time passcodes are no longer experimental; they’re being rolled into campaigns at scale.

The shift from nuisances to serious crime

Looking at 2024 as a whole, malware and PUPs together made up almost 90% of Android detections, with malware rising to about 43% of the total and potentially unwanted programs (PUPs) to 45%, while adware slid to around 12%.

That mix tells an important story: Attackers are spending less effort on noisy annoyance apps and more on tools that can quietly harvest data, intercept messages, or open the door to full account takeover.

But that’s not because adware and PUP numbers went down.

Shahak Shalev, Head of AI and Scam Research at Malwarebytes, pointed out:

“The holiday season may have just kicked off, but cybercriminals have been laying the groundwork for months for successful Android malware campaigns. In the second half of 2025, we observed a clear escalation in mobile threats. Adware volumes nearly doubled, driven by aggressive families like MobiDash, while PUP detections surged, suggesting attackers are experimenting with new delivery mechanisms. I urge everyone to stay vigilant over the holidays and not be tempted to click on sponsored ads, pop-ups or shop via social media. If an offer is too good to be true, it usually is.”

For years, Android/Adware.MobiDash has been one of the most common unwanted apps on Android. MobiDash comes as an adware software development kit (SDK) that developers (or repackagers) bolt onto regular apps to flood users with pop‑ups after a short delay. In 2025 it still shows up in our stats month after month, with thousands of detections under the MobiDash family alone.

So, threats like MobiDash are far from gone, but they are increasingly background noise behind the more serious threats that now stand out.

Over that same December–May versus June–November window, adware detections nearly doubled, PUP detections rose by roughly two-thirds, and malware detections grew by about 20%.

In the adware group, MobiDash alone grew its monthly detection volume by more than 100% between early and late 2025, even as adware as a whole remained a minority share of Android threats. In just the last three months we measured, MobiDash activity surged by about 77%, with detections climbing steadily from September through November.

A more organized approach

Rather than relying on delivering a single threat, we found cybercriminals are chaining components like droppers, spying modules, and banking payloads into flexible toolkits that can be mixed and matched per campaign.

What makes this shift worrying is the breadth of what information stealers now collect. Beyond call logs and location, many samples are tuned to monitor messaging apps, browser activity, and financial interactions, creating detailed behavioral profiles that can be reused across multiple fraud schemes. As long as this data remains monetizable on underground markets, the incentive to keep these surveillance ecosystems running will only grow.

As the ThreatDown 2025 State of Malware report points out:

“Just like phishing emails, phishing apps trick users into handing over their usernames, passwords, and two-factor authentication codes. Stolen credentials can be sold or used by cybercriminals to steal valuable information and access restricted resources.”

Predatory finance apps like SpyLoan and Albiriox typically use social engineering (sometimes AI-supported), promising fast cash, low-interest loans, and minimal checks. Once installed, they harvest contacts, messages, and device identifiers, which can then be used for harassment, extortion, or cross-platform identity abuse. Combined with access to SMS and notifications, that data lets operators watch victims juggle real debts, bank balances, and private conversations.

One of the clearest examples of this more organized approach is Triada, a long-lived remote access Trojan (RAT) for Android. In our December 2024 through May 2025 data, Triada appeared at relatively low but persistent levels. Its detections then more than doubled in the June–November period, with a pronounced spike late in the year.

Triada’s role is to give attackers a persistent foothold on the device: Once installed, it can help download or launch additional payloads, manipulate apps, and support on‑device fraud—exactly the kind of long‑term ‘infrastructure’ behavior that turns one‑off infections into ongoing operations.

Seeing a legacy threat like Triada ramp up in the same period as newer banking malware underlines that 2025 is when long‑standing mobile tools and fresh fraud kits start paying off for attackers at the same time.

If droppers, information stealers, and smishing are the scaffolding, banking Trojans are the cash register at the bottom of the funnel. Accessibility abuse, on-device fraud, and live screen streaming can make transactions happen inside the victim’s own banking session rather than on a cloned site. This approach sidesteps many defenses, such as device fingerprinting and some forms of multi-factor authentication (MFA). These shifts show up in the broader trend of our statistics, with more detections pointing to layered, end-to-end fraud pipelines.

Compared to the 2024 baseline, where phishing‑capable Android apps and OTP stealers together made up only a small fraction of all Android detections, the 2025 data shows their share growing by tens of percentage points in some months, especially around major fraud seasons.

What Android users should do now

Against this backdrop, Android users need to treat mobile security with the same seriousness as desktop and server environments. This bears repeating, as Malwarebytes research shows that people are 39% more likely to click a link on their phone than on their laptop.

A few practical steps make a real difference:

  • Prefer official app stores, but do not trust them blindly. Scrutinize developer reputation, reviews, and install counts, especially for financial and “utility” apps that ask for sensitive permissions.
  • Be extremely cautious with permissions like SMS access, notification access, Accessibility, and “Display over other apps,” which show up again and again in infostealers, banking Trojans, and OTP-stealing campaigns.
  • Avoid sideloading and gray-market firmware unless absolutely necessary. When possible, choose devices with a clear update policy and apply security patches promptly.
  • Treat unexpected texts and messages—particularly those about payments, deliveries, or urgent account issues—as hostile until proven otherwise and never tap links or install apps directly from them.
  • Run up-to-date real-time mobile security software that can detect malicious apps, block known bad links, and flag suspicious SMS activity before it turns into full account compromise.

Mobile threats in 2025 are no longer background noise or the exclusive domain of power users and enthusiasts. For many people, the phone is now the main attack surface—and the main gateway to their money, identity, and personal life.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

  •  

Photo booth flaw exposes people’s private pictures online

Photo booths are great. You press a button and get instant results. The same can’t be said, allegedly, for the security practices of at least one company operating them.

A security researcher spent weeks trying to warn a photo booth operator about a vulnerability in its system. The flaw reportedly exposed hundreds of customers’ private photos to anyone who knew where to look.

The researcher, who goes by the name Zeacer, said that a website operated by photo kiosk company Hama Film allowed anyone to download customer photos and videos without logging in. The Australian company provides photo kiosks for festivals, concerts, and commercial events. People take a snap and can both print it locally and also upload it to a website for retrieval later.

You would expect that such a site would be properly protected, so only you get to see yourself wearing nothing but a feather boa and guzzling from a bottle of Jack Daniels at your mate’s stag do. But reportedly, that wasn’t the case.

You get a photo! You get a photo! Everyone gets a photo!

According to TechCrunch, which has reviewed the researcher’s analysis, the website suffered from a well-known and extremely basic security flaw. TechCrunch stopped short of naming it, but mentioned sites with similar flaws where people could easily guess where files were held.

When files are stored at easily guessable locations and are not password protected, anyone can access them. Because those locations are predictable, attackers can write scripts that automatically visit them and download the files. When these files belong to users (such as photos and videos), that becomes a serious privacy risk.

At first glance, random photo theft might not sound that dangerous. But consider the possibilities. Facial recognition technology is widespread. People at events often wear lanyards with corporate affiliations or name badges. And while you might shrug off an embarrassing photo, it’s a different story if it’s a family shot and your children are in the frame. Those pictures could end up on someone’s hard drive somewhere, with no way to get them back or even know that they’ve been taken.

Companies have an ethical responsibility to respond

That’s why it’s so important for organizations to prevent the kind of basic vulnerability that Zeacer appears to have identified. They can do that by properly password-protecting files, limiting how quickly one user can access large numbers of files, and making the locations impossible to guess.
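The last of those mitigations, making locations impossible to guess, comes down to using high-entropy random identifiers instead of sequential ones. A minimal sketch in Python (the path layout here is hypothetical, not Hama Film’s actual scheme):

```python
import secrets

def make_download_path(event_id: str) -> str:
    """Build a download path whose final component is a 256-bit
    random token, so it cannot be guessed or enumerated."""
    token = secrets.token_urlsafe(32)  # 32 random bytes -> 43 URL-safe chars
    return f"/downloads/{event_id}/{token}"

# A sequential scheme like /downloads/party/photo_0001.jpg falls to a
# few thousand scripted requests; 2**256 possibilities do not.
print(make_download_path("party"))
```

Random paths are not a substitute for the other two controls: anything sensitive should still sit behind authentication, and download endpoints should be rate limited so bulk scraping stands out.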

They should also acknowledge researchers and fix vulnerabilities quickly when they’re reported. According to public reports, Hama Film didn’t reply to Zeacer’s messages, but instead shortened its file retention period from roughly two to three weeks down to about 24 hours. That might narrow the attack surface, but doesn’t stop someone from scraping all images daily.

So what can you do if you used one of these booths? Sadly, little more than assume that your photos have been accessed.

Organizations that hire photo booth providers have more leverage. They can ask how long images are retained, what data protection policies are in place, whether download links are password protected and rate limited, and whether the company has undergone third-party security audits.

Hama Film isn’t the only company to leave data exposed this way. TechCrunch has previously reported on a jury management system that exposed jurors’ personal data. Payday loan sites have leaked sensitive financial information, and in 2019, First American Financial Corp exposed 885 million files dating back 16 years.

In 2021, right-wing social network Parler saw up to 60 TB of data (including deleted posts) downloaded after hacktivists found an unprotected API with sequentially numbered endpoints. Sadly, we’re sure this latest incident won’t be the last.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

  •  

AWS Blames Russia’s GRU for Years-Long Espionage Campaign Targeting Western Energy Infrastructure


Amazon Web Services (AWS) has attributed a persistent multi-year cyber espionage campaign targeting Western critical infrastructure, particularly the energy sector, to a group strongly linked with Russia’s Main Intelligence Directorate (GRU), known widely as Sandworm (or APT44).

In a report released Monday, the cloud giant’s threat intelligence teams revealed that the Russian-nexus actor has maintained a "sustained focus" on North American and European critical infrastructure, with operations spanning from 2021 through the present day.

Misconfigured Devices are the Attackers' Gateway

Crucially, the AWS investigation found that the initial successful compromises were not due to any weakness in the AWS platform itself, but rather to the exploitation of misconfigured customer devices. The threat actor is exploiting a fundamental failure in network defense: customers failing to properly secure their network edge devices and virtual appliances.

The operation focuses on stealing credentials and establishing long-term persistence, often by compromising third-party network appliance software running on platforms like Amazon Elastic Compute Cloud (EC2).

AWS CISO CJ Moses warned in the report: "Going into 2026, organizations must prioritize securing their network edge devices and monitoring for credential replay attacks to defend against this persistent threat."

Persistence and Credential Theft, Part of the Sandworm Playbook

AWS observed the GRU-linked group employing several key tactics, techniques, and procedures (TTPs) aligned with their historical playbook:

  1. Exploiting Misconfigurations: Leveraging customer-side mistakes, particularly in exposed network appliances, to gain initial access.

  2. Establishing Persistence: Network-connection analysis shows actor-controlled IP addresses maintaining persistent, long-term connections to the compromised EC2 instances.

  3. Credential Harvesting: The ultimate objective is credential theft, enabling the attackers to move laterally across networks and escalate privileges, often targeting the accounts of critical infrastructure operators.

The overlap between the infrastructure AWS analyzed and known Sandworm operations—a group infamous for disruptive attacks like the 2015 and 2016 power grid blackouts in Ukraine—gives the attribution high confidence.

Recently, threat intelligence company Cyble detected advanced backdoors targeting defense systems, with TTPs closely resembling Russia’s Sandworm playbook.

Read: Cyble Detects Advanced Backdoor Targeting Defense Systems via Belarus Military Lure

Singular Focus on the Energy Supply Chain

The targeting profile analyzed by AWS' threat intelligence teams demonstrates a calculated and sustained focus on the global energy sector supply chain, including both direct operators and the technology providers that support them:

  • Energy Sector: Electric utility organizations, energy providers, and managed security service providers (MSSPs) specializing in energy clients.

  • Technology/Cloud Services: Collaboration platforms and source code repositories essential for critical infrastructure development.

  • Telecommunications: Telecom providers across multiple regions.

The geographic scope of the targeting is global, encompassing North America, Western and Eastern Europe, and the Middle East, illustrating a strategic objective to gain footholds in the operational technology (OT) and enterprise networks that govern power distribution and energy flow across NATO countries and allies.

From Cloud Edge to Credential Theft

AWS’ telemetry exposed a methodical, five-step campaign flow that leverages customer misconfiguration on cloud-hosted devices to gain initial access:

  1. Compromise Customer Network Edge Device hosted on AWS: The attack begins by exploiting customer-side vulnerabilities or misconfigurations in network edge devices (like firewalls or virtual appliances) running on platforms like Amazon EC2.

  2. Leverage Native Packet Capture Capability: Once inside, the actor exploits the device's own native functionality to eavesdrop on network traffic.

  3. Harvest Credentials from Intercepted Traffic: The crucial step involves stealing usernames and passwords from the intercepted traffic as they pass through the compromised device.

  4. Replay Credentials Against Victim Organizations’ Online Services and Infrastructure: The harvested credentials are then "replayed" (used) to access other services, allowing the attackers to pivot from the compromised appliance into the broader victim network.

  5. Establish Persistent Access for Lateral Movement: Finally, the actors establish a covert, long-term presence to facilitate lateral movement and further espionage.
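Step 4 is also where defenders get a detection opportunity: a credential that suddenly appears from infrastructure never before seen for that account is a replay signal. A toy sketch of that check (the event schema is invented for illustration):

```python
from collections import defaultdict

def find_replays(events):
    """Flag logins where an account is used from an IP never seen for
    that account before; events are (username, source_ip) in time order."""
    seen = defaultdict(set)   # username -> known source IPs
    suspicious = []
    for user, ip in events:
        if seen[user] and ip not in seen[user]:
            suspicious.append((user, ip))
        seen[user].add(ip)
    return suspicious

# An account's first sighting sets a baseline; later logins from new
# infrastructure are flagged for review, not automatically blocked.
logins = [("ops", "10.0.0.5"), ("ops", "10.0.0.5"), ("ops", "203.0.113.9")]
print(find_replays(logins))  # [('ops', '203.0.113.9')]
```

Real deployments weight such signals by geography, ASN, and timing rather than raw IP novelty, but the principle is the same.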

Secure the Edge and Stop Credential Replay

AWS has stated that while its infrastructure remains secure, the onus is on customers to correct the foundational security flaws that enable this campaign. The report strongly advises organizations to take immediate action on two fronts:

  • Secure Network Edge: Conduct thorough audits and patching of all network appliances and virtual devices exposed to the public internet, ensuring they are configured securely.

  • Monitor for Credential Replay: Implement advanced monitoring for indicators of compromise (IOCs) associated with credential replay and theft attacks, which the threat actors are leveraging to move deeper into target environments.
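The first recommendation can be partially automated. Below is a minimal sketch of the core check, written as a pure function over the JSON shape returned by the EC2 `DescribeSecurityGroups` API (fetching live data, e.g. via boto3's `describe_security_groups`, is left out so the check stays dependency-free); the port list is an illustrative assumption:

```python
def open_to_world(security_groups, risky_ports=(22, 443, 8443)):
    """Return (group_id, port) pairs whose ingress rules allow 0.0.0.0/0
    on ports commonly exposed by appliance management interfaces."""
    findings = []
    for sg in security_groups:
        for perm in sg.get("IpPermissions", []):
            if perm.get("FromPort") in risky_ports and any(
                rng.get("CidrIp") == "0.0.0.0/0"
                for rng in perm.get("IpRanges", [])
            ):
                findings.append((sg["GroupId"], perm["FromPort"]))
    return findings

# In a real audit the input would come from
# boto3.client("ec2").describe_security_groups()["SecurityGroups"].
sample = [
    {"GroupId": "sg-edge", "IpPermissions": [
        {"FromPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
    {"GroupId": "sg-internal", "IpPermissions": [
        {"FromPort": 22, "IpRanges": [{"CidrIp": "10.0.0.0/8"}]}]},
]
print(open_to_world(sample))  # [('sg-edge', 22)]
```

A finding here is not automatically a vulnerability, but any management port reachable from the whole internet deserves the audit the report calls for.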

  •  

Chinese Surveillance and AI

New report: “The Party’s AI: How China’s New AI Systems are Reshaping Human Rights.” From a summary article:

China is already the world’s largest exporter of AI powered surveillance technology; new surveillance technologies and platforms developed in China are also not likely to simply stay there. By exposing the full scope of China’s AI driven control apparatus, this report presents clear, evidence based insights for policymakers, civil society, the media and technology companies seeking to counter the rise of AI enabled repression and human rights violations, and China’s growing efforts to project that repression beyond its borders.

The report focuses on four areas where the CCP has expanded its use of advanced AI systems most rapidly between 2023 and 2025: multimodal censorship of politically sensitive images; AI’s integration into the criminal justice pipeline; the industrialisation of online information control; and the use of AI enabled platforms by Chinese companies operating abroad. Examined together, those cases show how new AI capabilities are being embedded across domains that strengthen the CCP’s ability to shape information, behaviour and economic outcomes at home and overseas.

Because China’s AI ecosystem is evolving rapidly and unevenly across sectors, we have focused on domains where significant changes took place between 2023 and 2025, where new evidence became available, or where human rights risks accelerated. Those areas do not represent the full range of AI applications in China but are the most revealing of how the CCP is integrating AI technologies into its political control apparatus.

News article.

  •  

Amazon Exposes Years-Long GRU Cyber Campaign Targeting Energy and Cloud Infrastructure

Amazon's threat intelligence team has disclosed details of a "years-long" Russian state-sponsored campaign that targeted Western critical infrastructure between 2021 and 2025. Targets of the campaign included energy sector organizations across Western nations, critical infrastructure providers in North America and Europe, and entities with cloud-hosted network infrastructure. The activity has

  •  

Why Data Security and Privacy Need to Start in Code

AI-assisted coding and AI app generation platforms have created an unprecedented surge in software development. Companies are now facing rapid growth in both the number of applications and the pace of change within those applications. Security and privacy teams are under significant pressure as the surface area they must cover is expanding quickly while their staffing levels remain largely

  •  

Fortinet FortiGate Under Active Attack Through SAML SSO Authentication Bypass

Threat actors have begun to exploit two newly disclosed security flaws in Fortinet FortiGate devices, less than a week after public disclosure. Cybersecurity company Arctic Wolf said it observed active intrusions involving malicious single sign-on (SSO) logins on FortiGate appliances on December 12, 2025. The attacks exploit two critical authentication bypasses (CVE-2025-59718 and CVE-2025-59719

  •  

React2Shell Vulnerability Actively Exploited to Deploy Linux Backdoors

The security vulnerability known as React2Shell is being exploited by threat actors to deliver malware families like KSwapDoor and ZnDoor, according to findings from Palo Alto Networks Unit 42 and NTT Security. "KSwapDoor is a professionally engineered remote access tool designed with stealth in mind," Justin Moore, senior manager of threat intel research at Palo Alto Networks Unit 42, said in a

  •  


Google is discontinuing its dark web report: why it matters

Google has announced that early next year it is discontinuing its dark web report, which monitored breach data circulating on the dark web.

The news raised some eyebrows, but Google says it’s ending the feature because feedback showed the reports didn’t provide “helpful next steps.” New scans will stop on January 15, 2026, and on February 16, the entire tool will disappear along with all associated monitoring data. Early reactions are mixed: some users express disappointment and frustration, others seem largely indifferent because they already rely on alternatives, and a small group feels relieved that the worry‑inducing alerts will disappear.

All those sentiments are understandable. Knowing that someone found your information on the dark web does not automatically make you safer. You cannot simply log into a dark market forum and ask criminals to delete or return your data.

But there is value in knowing what’s out there, because it can help you respond to the situation before problems escalate. That’s where dark web and data exposure tools show their use: they turn vague fear (“Is my data out there?”) into specific risk (“This email and password are in a breach.”).

The dark web is often portrayed as a shady corner of the internet where stolen data circulates endlessly, and to some extent, that’s accurate. Password dumps, personal records, social security numbers (SSNs), and credit card details are traded for profit. Once combined into massive credential and identity databases accessible to cybercriminals, this information can be used for account takeovers, phishing, and identity fraud.

There are no tools to erase information that is circulating on dark web forums, but that was never really the promise.

Google says it is shifting its focus towards “tools that give you more actionable steps,” like Password Manager, Security Checkup, and Results About You. Without doubt, those tools help, but they work better when users understand why they matter. Discontinuing the dark web report removes a simple visibility feature, but it also reminds users that cybersecurity awareness means staying careful on the open web and understanding what attackers might use against them.

How can Malwarebytes help?

The real value comes from three actions: being aware of the exposure, cutting off easy new data sources, and reacting quickly when something goes wrong.

This is where dedicated security tools can help you.

Malwarebytes Personal Data Remover assists you in discovering and removing your data from data broker sites (among others), shrinking the pool of information that can be aggregated, resold, or used to profile you.

Our Digital Footprint scan gives you a clearer picture of where your data has surfaced online, including exposures that could eventually feed into dark web datasets.

Malwarebytes Identity Theft Protection adds ongoing monitoring and recovery support, helping you spot suspicious use of your identity and get expert help if someone tries to open accounts or take out credit in your name.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

  •  

Post-Quantum Cryptography (PQC): Application Security Migration Guide

The coming shift to Post-Quantum Cryptography (PQC) is not a distant, abstract threat—it is the single largest, most complex cryptographic migration in the history of cybersecurity. Major breakthroughs are arriving quickly: Google announced on October 22nd “research that shows, for the first time in history, that a quantum computer can successfully run a verifiable algorithm on hardware, surpassing even the fastest classical supercomputers (13,000x faster).” The technology has the potential to disrupt every industry. Organizations must prepare now or pay later.
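A common first step in any PQC migration plan is building a cryptographic inventory: finding every place the codebase depends on quantum-vulnerable public-key primitives. As a rough illustration of that step (this sketch is our own, not taken from the linked guide, and the pattern list is illustrative rather than exhaustive):

```python
import re
from pathlib import Path

# Identifiers commonly tied to quantum-vulnerable public-key crypto.
CLASSICAL_PATTERNS = re.compile(
    r"\b(RSA|ECDSA|ECDH|DSA|X25519|secp256k1|prime256v1)\b"
)

def inventory_crypto(root: Path, suffixes=(".py", ".go", ".java", ".ts")) -> dict:
    """Map each source file under `root` to the line numbers mentioning
    classical public-key primitives -- a starting point for a PQC plan."""
    hits: dict[str, list[int]] = {}
    for path in root.rglob("*"):
        if path.suffix not in suffixes or not path.is_file():
            continue
        for lineno, line in enumerate(
            path.read_text(errors="ignore").splitlines(), start=1
        ):
            if CLASSICAL_PATTERNS.search(line):
                hits.setdefault(str(path), []).append(lineno)
    return hits
```

Real inventories go further (TLS configurations, certificates, dependency manifests, hardware tokens), but even a crude source scan like this surfaces where key exchange and signatures will eventually need hybrid or ML-KEM/ML-DSA replacements.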

The post Post-Quantum Cryptography (PQC): Application Security Migration Guide appeared first on Security Boulevard.

  •  

Denial-of-Service and Source Code Exposure in React Server Components

In early December 2025, the React core team disclosed two new vulnerabilities affecting React Server Components (RSC). These issues – Denial-of-Service and Source Code Exposure – were found by security researchers probing the fixes for the previous week’s critical RSC vulnerability, known as “React2Shell”. While these newly discovered bugs do not enable Remote Code Execution, meaning […]

The post Denial-of-Service and Source Code Exposure in React Server Components appeared first on Kratikal Blogs.

The post Denial-of-Service and Source Code Exposure in React Server Components appeared first on Security Boulevard.

  •  

Multiple Vulnerabilities in Apple Products Could Allow for Arbitrary Code Execution

Multiple vulnerabilities have been discovered in Apple products, the most severe of which could allow for arbitrary code execution. Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution in the context of the logged on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.

  •  

Multiple Vulnerabilities in Google Chrome Could Allow for Arbitrary Code Execution

Multiple vulnerabilities have been discovered in Google Chrome, the most severe of which could allow for arbitrary code execution. Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution in the context of the logged on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.

  •  

Multiple Vulnerabilities in Adobe Products Could Allow for Arbitrary Code Execution

Multiple vulnerabilities have been discovered in Adobe products, the most severe of which could allow for arbitrary code execution.

  • Adobe ColdFusion is a rapid web application development platform that uses the ColdFusion Markup Language (CFML).
  • Adobe Experience Manager (AEM) is a content management and experience management system that helps businesses build and manage their digital presence across various platforms.
  • The Adobe DNG Software Development Kit (SDK) is a free set of tools and code from Adobe that helps developers add support for Adobe's Digital Negative (DNG) universal RAW file format to their own applications and cameras, enabling them to read, write, and process DNG images, which solves workflow issues and improves archiving for digital photos.
  • Adobe Acrobat is a suite of paid tools for creating, editing, converting, and managing PDF documents.
  • The Adobe Creative Cloud desktop app is the central hub for managing all Adobe creative applications, files, and assets.

Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution in the context of the logged on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.

  •  

Critical Patches Issued for Microsoft Products, December 9, 2025

Multiple vulnerabilities have been discovered in Microsoft products, the most severe of which could allow for remote code execution. Successful exploitation of the most severe of these vulnerabilities could result in an attacker gaining the same privileges as the logged-on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.

  •