Venezuelan Oil Company Downplays Alleged US Cyberattack
The final EFFector of 2025 is here! Just in time to keep you up to date on the latest happenings in the fight for privacy and free speech online.
In this latest issue, we're sharing how to spot sneaky ALPR cameras at the U.S. border, covering a host of new resources on age verification laws, and explaining why AI companies need to protect chatbot logs from bulk surveillance.
Prefer to listen in? Check out our audio companion, where EFF Activist Molly Buckley walks through our new resource on age verification laws and how you can fight back. Catch the conversation on YouTube or the Internet Archive.
EFFECTOR 37.18 - 🪪 AGE VERIFICATION IS COMING FOR THE INTERNET
Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.
Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Whether you’re generating data from scratch or transforming sensitive production data, performant test data generators are critical tools for achieving compliance in development workflows.
The post How test data generators support compliance and data privacy appeared first on Security Boulevard.
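To make that idea concrete, here is a minimal, hedged sketch of synthetic test data generation using the open-source Faker library. The record fields are illustrative assumptions, not taken from the article; the point is simply that test databases can be seeded without ever copying real customer PII.

```python
# A minimal synthetic-test-data sketch using the Faker library.
# Assumption: a Python stack with the "faker" package installed;
# the field names below are illustrative, not a compliance schema.
from faker import Faker

fake = Faker()

def make_test_customer() -> dict:
    """Generate one synthetic customer record containing no real PII."""
    return {
        "name": fake.name(),
        "email": fake.email(),
        "address": fake.address(),
        "date_of_birth": fake.date_of_birth(minimum_age=18, maximum_age=90).isoformat(),
    }

if __name__ == "__main__":
    # Produce a small batch of records for seeding a test database.
    for record in (make_test_customer() for _ in range(5)):
        print(record)
```

Because every value is generated rather than copied from production, records like these can move freely through development and CI environments without triggering data-privacy obligations.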

Identity systems hold modern life together, yet we barely notice them until they fail. Every time someone starts a new job, crosses a border, or walks into a secure building, an official must answer one deceptively simple question: Is this person really who they claim to be? That single moment—matching a living, breathing human to..
The post Can a Transparent Piece of Plastic Win the Invisible War on Your Identity? appeared first on Security Boulevard.
Articles related to cyber risk quantification, cyber risk management, and cyber resilience.
The post Communicating AI Risk to the Board With Confidence | Kovrr appeared first on Security Boulevard.

Over the past week, enterprise security teams observed a combination of covert malware communication attempts and aggressive probing of publicly exposed infrastructure. These incidents, detected across firewall and endpoint security layers, demonstrate how modern cyber attackers operate simultaneously. While quietly activating compromised internal systems, they also relentlessly scan external services for exploitable weaknesses. Although the
The post Real Attacks of the Week: How Spyware Beaconing and Exploit Probing Are Shaping Modern Intrusions appeared first on Seceon Inc.
The post Real Attacks of the Week: How Spyware Beaconing and Exploit Probing Are Shaping Modern Intrusions appeared first on Security Boulevard.
Labeling adversary activity with ATT&CK techniques is a tried-and-true method for classifying behavior. But it rarely tells defenders how those behaviors are executed in real environments.
The post Extracting the How: Scaling Adversary Procedures Intelligence with AI appeared first on Security Boulevard.
Frankfurt am Main, Germany, 16th December 2025, CyberNewsWire
The post Link11 Identifies Five Cybersecurity Trends Set to Shape European Defense Strategies in 2026 appeared first on Security Boulevard.
As organizations accelerate the adoption of Artificial Intelligence, from deploying Large Language Models (LLMs) to integrating autonomous agents and Model Context Protocol (MCP) servers, risk management has transitioned from a theoretical exercise to a critical business imperative. The NIST AI Risk Management Framework (AI RMF 1.0) has emerged as the standard for managing these risks, offering a structured approach to designing, developing, and deploying trustworthy AI systems.
However, AI systems do not operate in isolation. They rely heavily on Application Programming Interfaces (APIs) to ingest training data, serve model inferences, and facilitate communication between agents and servers. Consequently, the API attack surface effectively becomes the AI attack surface. Securing these API pathways is fundamental to achieving the "Secure and Resilient" and "Privacy-Enhanced" characteristics mandated by the framework.
The NIST AI RMF is organized around four core functions (Govern, Map, Measure, and Manage) that provide a structure for managing risk throughout the AI lifecycle.
While the "GOVERN" function in the NIST framework focuses on organizational culture and policies, API Posture Governance serves as the technical enforcement mechanism for these policies in operational environments.
Without robust API posture governance, organizations struggle to effectively Manage or Govern their AI risks. Unvetted AI models may be deployed via shadow APIs, and sensitive training data can be exposed through misconfigurations. Automating posture governance ensures that every API connected to an AI system adheres to security standards, preventing the deployment of insecure models and ensuring your AI infrastructure remains compliant by design.
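As a rough illustration of what automated posture checks can look like, the sketch below scans an OpenAPI description and flags operations that declare no security requirement. This is a simplified, hedged example (it is not Salt Security's product or a complete governance control), and the spec filename is an assumption.

```python
# Flag OpenAPI operations with no declared security requirement.
# Assumptions: PyYAML is installed; "openapi.yaml" is an illustrative path.
import yaml

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def find_unauthenticated_operations(spec_path: str) -> list[str]:
    with open(spec_path, "r", encoding="utf-8") as fh:
        spec = yaml.safe_load(fh)

    global_security = spec.get("security")  # spec-wide default, if any
    findings = []
    for path, operations in spec.get("paths", {}).items():
        for method, operation in (operations or {}).items():
            if method not in HTTP_METHODS:
                continue
            # Operation-level "security" overrides the global default;
            # an empty list explicitly disables authentication.
            security = operation.get("security", global_security)
            if not security:
                findings.append(f"{method.upper()} {path}")
    return findings

if __name__ == "__main__":
    for finding in find_unauthenticated_operations("openapi.yaml"):
        print("No security requirement:", finding)
```

Running a check like this in CI is one way to turn a written "every AI-facing API must authenticate callers" policy into an enforced gate rather than a document.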
Salt Security provides a tailored solution that aligns directly with the NIST AI RMF. By securing the API layer (Agentic AI Action Layer), Salt Security helps organizations maintain the integrity of their AI systems and safeguard sensitive data. The key features, along with their direct correlations to NIST AI RMF functions, include:
Trustworthy AI requires secure APIs. By implementing API Posture Governance and comprehensive security controls, organizations can confidently adopt the NIST AI RMF and innovate safely. Salt Security provides the visibility and protection needed to secure the critical infrastructure powering your AI. For a more in-depth understanding of API security compliance across multiple regulations, please refer to our comprehensive API Compliance Whitepaper.
If you want to learn more about Salt and how we can help you, please contact us, schedule a demo, or visit our website. You can also get a free API Attack Surface Assessment from Salt Security's research team and learn what attackers already know.
The post Securing the AI Frontier: How API Posture Governance Enables NIST AI RMF Compliance appeared first on Security Boulevard.

Breaking Free from Security Silos in the Modern Enterprise
Today’s organizations face an unprecedented challenge: securing increasingly complex IT environments that span on-premises data centers, multiple cloud platforms, and hybrid architectures. Traditional security approaches that rely on disparate point solutions are failing to keep pace with sophisticated threats, leaving critical gaps in visibility and response
The post Unified Security for On-Prem, Cloud, and Hybrid Infrastructure: The Seceon Advantage appeared first on Seceon Inc.
The post Unified Security for On-Prem, Cloud, and Hybrid Infrastructure: The Seceon Advantage appeared first on Security Boulevard.
SoundCloud confirmed today that it experienced a security incident involving unauthorized access to a supporting internal system, resulting in the exposure of certain user data. The company said the incident affected approximately 20 percent of its users and involved email addresses along with information already visible on public SoundCloud profiles. Passwords and financial information were […]
The post SoundCloud Confirms Security Incident appeared first on Centraleyes.
The post SoundCloud Confirms Security Incident appeared first on Security Boulevard.
This article was originally published in T.H.E. Journal on 12/10/25 by Charlie Sander. Device-based learning is no longer “new,” but many schools still lack a coherent playbook for managing it. Many school districts dashed to adopt 1:1 computing during the pandemic, spending $48 million on new devices to ensure every child had a platform to take classes ...
The post T.H.E. Journal: How Schools Can Reduce Digital Distraction Without Surveillance appeared first on ManagedMethods Cybersecurity, Safety & Compliance for K-12.
The post T.H.E. Journal: How Schools Can Reduce Digital Distraction Without Surveillance appeared first on Security Boulevard.
Comparing data breaches is like comparing apples and oranges. They differ on many levels. To news media, the size of the brand, how many users were impacted, and how it was done often dominate the headlines. For victims, what really matters is the type of information stolen. And for the organizations involved, the focus is on how they will handle the incident. So, let’s have a look at the three that showed up in the news feeds today.
700Credit is a US provider of credit reports, preliminary credit checks, identity verification, fraud detection, and compliance tools for automobile, recreational vehicle, powersports, and marine dealerships.
In a notice on its website, 700Credit informed media, partners, and affected individuals that it suffered a third-party supply-chain attack in late October 2025. According to the notice, an attacker gained unauthorized access to personally identifiable information (PII), including names, addresses, dates of birth, and Social Security numbers (SSNs). The breach involves data collected between May and October, impacting roughly 5.6 million people.
The supply-chain attack demonstrates how much incident handling matters. Reportedly, 700Credit communicates with more than 200 integration partners through application programming interfaces (APIs). When one of those partners was compromised in July, it failed to notify 700Credit. The unnamed cybercriminals behind that intrusion were then able to exploit an API used to pull consumer information.
700Credit shut down the exposed third-party API, notified the FBI and FTC, and is mailing letters to victims offering credit monitoring while coordinating with dealers and state regulators.
SoundCloud is a leading audio streaming platform where users can upload, promote, stream, and share music, podcasts, and other audio content.
SoundCloud posted a notice on its website stating that it recently detected unauthorized activity in an ancillary service dashboard. Ancillary services refer to specialized functions that help maintain stability and reliability. While SoundCloud was containing the attack, it also experienced denial-of-service attacks, two of which temporarily knocked the platform offline on the web.
An investigation found that no sensitive data such as financial or password data was accessed. The exposed data consisted of email addresses and information already visible on public SoundCloud profiles. The company estimates the incident affected roughly 20% of its user base.
Pornhub is one of the world’s most visited adult video-sharing websites, allowing users to view content anonymously or create accounts to upload and interact with videos.
Reportedly, Pornhub disclosed that on November 8, 2025, a security breach at third-party analytics provider Mixpanel exposed “a limited set of analytics events for certain users.” Pornhub stressed that this was not a breach of Pornhub’s own systems, and said that passwords, payment details, and financial information were not exposed. Mixpanel, however, disputes that the data originated from its November 2025 security incident.
According to reports, the ShinyHunters extortion group claims to have obtained about 94 GB of data containing more than 200 million analytics records tied to Pornhub Premium activity. ShinyHunters shared a data sample with BleepingComputer that included a Pornhub Premium member’s email address, activity type, location, video URL, video name, keywords associated with the video, and the time the event occurred.
ShinyHunters has told BleepingComputer that it sent extortion demands to Pornhub, and the nature of the exposed data creates clear risks for blackmail, outing, and reputational harm—even though no Social Security numbers, government IDs, or payment card details are in the scope of the breach.
As you can see, these are three very different data breaches. Not just in how they happened, but in what they mean for the people affected.
While email addresses and knowing that someone uses SoundCloud could be useful for phishers and scammers, that’s a long way from the leverage that comes with detailed records of Pornhub Premium activity. If that doesn’t get you on the list of a “hello pervert” scammer, I don’t know what will.
But undoubtedly the most dangerous one for those affected is the 700Credit breach, which provides an attacker with enough information for identity theft. In the other cases, an attacker will have to penetrate another defense layer, but with a successful identity theft the attacker has already reached an important goal.
| Aspect | SoundCloud | 700Credit | Pornhub |
| --- | --- | --- | --- |
| People affected | Estimated ~28–36 million users (about 20% of users) | ~5.6 million people | “Select” Premium users; ~201 million activity records (not 201 million people) |
| Leaked data | Email addresses and public profile info | Names, addresses, dates of birth, SSNs | Search, watch, and download activity; attacker-shared samples include email addresses, timestamps, and IP/geo-location data |
| Sensitivity level | Low (mostly already public contact/profile data) | Very high (classic identity‑theft PII) | Very high (intimate behavioral and preference data, blackmail/extortion potential) |
| Breach cause | Unauthorized access to an internal service dashboard | Third‑party API compromise (supply‑chain attack) | Disputed incident involving third-party analytics data (Mixpanel), following a smishing campaign |
We don’t just report on threats—we help safeguard your entire digital identity
Cybersecurity risks should never spread beyond a headline. Protect your personal information, and your family’s, by using identity protection.
Android users spent 2025 walking a tighter rope than ever, with malware, data‑stealing apps, and SMS‑borne scams all climbing sharply while attackers refined their business models around mobile data and access.
Looking back, we may view 2025 as the year when one-off scams were replaced on the score charts by coordinated, well-structured attack frameworks.
Comparing two equal six‑month periods—December 2024 through May 2025 versus June through November 2025—our data shows Android adware detections nearly doubled (90% increase), while PUP detections increased by roughly two‑thirds and malware detections by about 20%.
The strong rise in SMS-based attacks we flagged in June indicates that 2025 is the payoff year. The capabilities to steal one‑time passcodes are no longer experimental; they’re being rolled into campaigns at scale.
Looking at 2024 as a whole, malware and PUPs together made up almost 90% of Android detections, with malware rising to about 43% of the total and potentially unwanted programs (PUPs) to 45%, while adware slid to around 12%.
That mix tells an important story: Attackers are spending less effort on noisy annoyance apps and more on tools that can quietly harvest data, intercept messages, or open the door to full account takeover.
But that’s not because adware and PUP numbers went down.
Shahak Shalev, Head of AI and Scam Research at Malwarebytes, pointed out:
“The holiday season may have just kicked off, but cybercriminals have been laying the groundwork for months for successful Android malware campaigns. In the second half of 2025, we observed a clear escalation in mobile threats. Adware volumes nearly doubled, driven by aggressive families like MobiDash, while PUP detections surged, suggesting attackers are experimenting with new delivery mechanisms. I urge everyone to stay vigilant over the holidays and not be tempted to click on sponsored ads, pop-ups or shop via social media. If an offer is too good to be true, it usually is.”
For years, Android/Adware.MobiDash has been one of the most common unwanted apps on Android. MobiDash comes as an adware software development kit (SDK) that developers (or repackagers) bolt onto regular apps to flood users with pop‑ups after a short delay. In 2025 it still shows up in our stats month after month, with thousands of detections under the MobiDash family alone.
So, threats like MobiDash are far from gone, but they increasingly become background noise against more serious threats that now stand out.
Over that same December–May versus June–November window, adware detections nearly doubled, PUP detections rose by about 75%, and malware detections grew by roughly 20%.
In the adware group, MobiDash alone grew its monthly detection volume by more than 100% between early and late 2025, even as adware as a whole remained a minority share of Android threats. In just the last three months we measured, MobiDash activity surged by about 77%, with detections climbing steadily from September through November.
Rather than relying on delivering a single threat, we found cybercriminals are chaining components like droppers, spying modules, and banking payloads into flexible toolkits that can be mixed and matched per campaign.
What makes this shift worrying is the breadth of what information stealers now collect. Beyond call logs and location, many samples are tuned to monitor messaging apps, browser activity, and financial interactions, creating detailed behavioral profiles that can be reused across multiple fraud schemes. As long as this data remains monetizable on underground markets, the incentive to keep these surveillance ecosystems running will only grow.
As the ThreatDown 2025 State of Malware report points out:
“Just like phishing emails, phishing apps trick users into handing over their usernames, passwords, and two-factor authentication codes. Stolen credentials can be sold or used by cybercriminals to steal valuable information and access restricted resources.”
Predatory finance apps like SpyLoan and Albiriox typically use social engineering (sometimes AI-supported) promising fast cash, low-interest loans, and minimal checks. Once installed, they harvest contacts, messages, and device identifiers, which can then be used for harassment, extortion, or cross‑platform identity abuse. Combined with access to SMS and notifications, that data lets operators watch victims juggle real debts, bank balances, and private conversations.
One of the clearest examples of this more organized approach is Triada, a long-lived remote access Trojan (RAT) for Android. In our December 2024 through May 2025 data, Triada appeared at relatively low but persistent levels. Its detections then more than doubled in the June–November period, with a pronounced spike late in the year.
Triada’s role is to give attackers a persistent foothold on the device: Once installed, it can help download or launch additional payloads, manipulate apps, and support on‑device fraud—exactly the kind of long‑term ‘infrastructure’ behavior that turns one‑off infections into ongoing operations.
Seeing a legacy threat like Triada ramp up in the same period as newer banking malware underlines that 2025 is when long‑standing mobile tools and fresh fraud kits start paying off for attackers at the same time.
If droppers, information stealers, and smishing are the scaffolding, banking Trojans are the cash register at the bottom of the funnel. Accessibility abuse, on‑device fraud, and live screen streaming can make transactions happen inside the victim’s own banking session rather than on a cloned site. This approach sidesteps many defenses, such as device fingerprinting and some forms of multi-factor authentication (MFA). These shifts show up in the broader trend of our statistics, with more detections pointing to layered, end‑to‑end fraud pipelines.
Compared to the 2024 baseline, where phishing‑capable Android apps and OTP stealers together made up only a small fraction of all Android detections, the 2025 data shows their share growing by tens of percentage points in some months, especially around major fraud seasons.
Against this backdrop, Android users need to treat mobile security with the same seriousness as desktop and server environments. This bears repeating, as Malwarebytes research shows that people are 39% more likely to click a link on their phone than on their laptop.
A few practical steps make a real difference:
Mobile threats in 2025 are no longer background noise or the exclusive domain of power users and enthusiasts. For many people, the phone is now the main attack surface—and the main gateway to their money, identity, and personal life.
We don’t just report on phone security—we provide it
Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.
Photo booths are great. You press a button and get instant results. The same can’t be said, allegedly, for the security practices of at least one company operating them.
A security researcher spent weeks trying to warn a photo booth operator about a vulnerability in its system. The flaw reportedly exposed hundreds of customers’ private photos to anyone who knew where to look.
The researcher, who goes by the name Zeacer, said that a website operated by photo kiosk company Hama Film allowed anyone to download customer photos and videos without logging in. The Australian company provides photo kiosks for festivals, concerts, and commercial events. People take a snap and can both print it locally and also upload it to a website for retrieval later.
You would expect that such a site would be properly protected, so only you get to see yourself wearing nothing but a feather boa and guzzling from a bottle of Jack Daniels at your mate’s stag do. But reportedly, that wasn’t the case.
According to TechCrunch, which has reviewed the researcher’s analysis, the website suffered from a well-known and extremely basic security flaw. TechCrunch stopped short of naming it, but mentioned sites with similar flaws where people could easily guess where files were held.
When files are stored at easily guessable locations and are not password protected, anyone can access them. Because those locations are predictable, attackers can write scripts that automatically visit them and download the files. When these files belong to users (such as photos and videos), that becomes a serious privacy risk.
At first glance, random photo theft might not sound that dangerous. But consider the possibilities. Facial recognition technology is widespread. People at events often wear lanyards with corporate affiliations or name badges. And while you might shrug off an embarrassing photo, it’s a different story if it’s a family shot and your children are in the frame. Those pictures could end up on someone’s hard drive somewhere, with no way to get them back or even know that they’ve been taken.
That’s why it’s so important for organizations to prevent the kind of basic vulnerability that Zeacer appears to have identified. They can do that by properly password-protecting files, limiting how quickly one user can access large numbers of files, and making the locations impossible to guess.
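To illustrate the last of those fixes, here is a minimal, hedged sketch of serving downloads at unguessable locations by generating a random token per file. The hostname and storage mapping are illustrative assumptions; a real deployment would also persist the mapping and add authentication, expiry, and rate limiting.

```python
# Replace guessable, sequential photo URLs with unguessable random tokens.
# Assumption: an in-memory mapping and an example hostname, for illustration only.
import secrets
from typing import Optional

_downloads: dict[str, str] = {}

def publish(file_path: str) -> str:
    """Return a download URL whose path cannot be enumerated by counting up."""
    token = secrets.token_urlsafe(32)  # roughly 256 bits of randomness
    _downloads[token] = file_path
    return f"https://photos.example.com/d/{token}"

def resolve(token: str) -> Optional[str]:
    """Look up the file for a presented token; unknown tokens yield nothing."""
    return _downloads.get(token)

if __name__ == "__main__":
    url = publish("/srv/booth/event42/photo_0001.jpg")
    print("Share this link with the customer only:", url)
```

With links like these, an attacker can no longer script their way through photo_0001, photo_0002, and so on, which is exactly the enumeration the researcher described.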
They should also acknowledge researchers and fix vulnerabilities quickly when they’re reported. According to public reports, Hama Film didn’t reply to Zeacer’s messages, but instead shortened its file retention period from roughly two to three weeks down to about 24 hours. That might narrow the attack surface, but doesn’t stop someone from scraping all images daily.
So what can you do if you used one of these booths? Sadly, little more than assume that your photos have been accessed.
Organizations that hire photo booth providers have more leverage. They can ask how long images are retained, what data protection policies are in place, whether download links are password protected and rate limited, and whether the company has undergone third-party security audits.
Hama Film isn’t the only company to fall victim to these kinds of exploits. TechCrunch has previously reported on a jury management system that exposed jurors’ personal data. Payday loan sites have leaked sensitive financial information, and in 2019, First American Financial Corp exposed 885 million files dating back 16 years.
In 2021, right-wing social network Parler saw up to 60 TB of data (including deleted posts) downloaded after hacktivists found an unprotected API with sequentially numbered endpoints. Sadly, we’re sure this latest incident won’t be the last.
We don’t just report on data privacy—we help you remove your personal information
Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.
Amazon Web Services (AWS) has attributed a persistent multi-year cyber espionage campaign targeting Western critical infrastructure, particularly the energy sector, to a group strongly linked with Russia’s Main Intelligence Directorate (GRU), known widely as Sandworm (or APT44).
In a report released Monday, the cloud giant’s threat intelligence teams revealed that the Russian-nexus actor has maintained a "sustained focus" on North American and European critical infrastructure, with operations spanning from 2021 through the present day.
Crucially, the AWS investigation found that the initial successful compromises were not due to any weakness in the AWS platform itself, but rather to the exploitation of misconfigured customer devices. The threat actor is exploiting a fundamental failure in network defense: customers failing to properly secure their network edge devices and virtual appliances.
The operation focuses on stealing credentials and establishing long-term persistence, often by compromising third-party network appliance software running on platforms like Amazon Elastic Compute Cloud (EC2).
AWS CISO CJ Moses commented in the report, warning, "Going into 2026, organizations must prioritize securing their network edge devices and monitoring for credential replay attacks to defend against this persistent threat."
AWS observed the GRU-linked group employing several key tactics, techniques, and procedures (TTPs) aligned with their historical playbook:
Exploiting Misconfigurations: Leveraging customer-side mistakes, particularly in exposed network appliances, to gain initial access.
Establishing Persistence: Network connection analysis showed actor-controlled IP addresses maintaining persistent, long-term connections to the compromised EC2 instances.
Credential Harvesting: The ultimate objective is credential theft, enabling the attackers to move laterally across networks and escalate privileges, often targeting the accounts of critical infrastructure operators.
AWS’s analysis found infrastructure overlaps with known Sandworm operations—a group infamous for disruptive attacks like the 2015 and 2016 power grid blackouts in Ukraine—providing high confidence in the attribution.
Recently, threat intelligence company Cyble detected advanced backdoors targeting defense systems, with TTPs closely resembling Russia's Sandworm playbook.
The targeting profile analyzed by AWS' threat intelligence teams demonstrates a calculated and sustained focus on the global energy sector supply chain, including both direct operators and the technology providers that support them:
Energy Sector: Electric utility organizations, energy providers, and managed security service providers (MSSPs) specializing in energy clients.
Technology/Cloud Services: Collaboration platforms and source code repositories essential for critical infrastructure development.
Telecommunications: Telecom providers across multiple regions.
The geographic scope of the targeting is global, encompassing North America, Western and Eastern Europe, and the Middle East, illustrating a strategic objective to gain footholds in the operational technology (OT) and enterprise networks that govern power distribution and energy flow across NATO countries and allies.
AWS’ telemetry exposed a methodical, five-step campaign flow that leverages customer misconfiguration on cloud-hosted devices to gain initial access:
Compromise Customer Network Edge Device hosted on AWS: The attack begins by exploiting customer-side vulnerabilities or misconfigurations in network edge devices (like firewalls or virtual appliances) running on platforms like Amazon EC2.
Leverage Native Packet Capture Capability: Once inside, the actor exploits the device's own native functionality to eavesdrop on network traffic.
Harvest Credentials from Intercepted Traffic: The crucial step involves stealing usernames and passwords from the intercepted traffic as they pass through the compromised device.
Replay Credentials Against Victim Organizations’ Online Services and Infrastructure: The harvested credentials are then "replayed" (used) to access other services, allowing the attackers to pivot from the compromised appliance into the broader victim network.
Establish Persistent Access for Lateral Movement: Finally, the actors establish a covert, long-term presence to facilitate lateral movement and further espionage.
AWS has stated that while its infrastructure remains secure, the onus is on customers to correct the foundational security flaws that enable this campaign. The report strongly advises organizations to take immediate action on two fronts:
Secure Network Edge: Conduct thorough audits and patching of all network appliances and virtual devices exposed to the public internet, ensuring they are configured securely.
Monitor for Credential Replay: Implement advanced monitoring for indicators of compromise (IOCs) associated with credential replay and theft attacks, which the threat actors are leveraging to move deeper into target environments.
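To make the credential-replay monitoring advice concrete, here is a deliberately simple, hedged heuristic: flag accounts whose successful logins arrive from many distinct source IPs within a short window. The CSV log format, column names, and thresholds are assumptions for illustration, not an AWS-recommended detection.

```python
# Illustrative credential-replay heuristic over authentication logs.
# Assumption: a CSV with columns timestamp, user, source_ip, outcome.
import csv
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)
MAX_DISTINCT_IPS = 3  # illustrative threshold

def flag_credential_replay(log_path: str) -> dict[str, set[str]]:
    events = defaultdict(list)  # user -> [(timestamp, ip), ...]
    with open(log_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            if row["outcome"] != "success":
                continue
            ts = datetime.fromisoformat(row["timestamp"])
            events[row["user"]].append((ts, row["source_ip"]))

    suspicious = {}
    for user, entries in events.items():
        entries.sort()
        for i, (start, _) in enumerate(entries):
            # Collect distinct source IPs seen within WINDOW of this login.
            ips = {ip for ts, ip in entries[i:] if ts - start <= WINDOW}
            if len(ips) > MAX_DISTINCT_IPS:
                suspicious[user] = ips
                break
    return suspicious

if __name__ == "__main__":
    for user, ips in flag_credential_replay("auth_events.csv").items():
        print(f"Review {user}: {len(ips)} source IPs within one hour -> {sorted(ips)}")
```

A heuristic this coarse will produce false positives for roaming users, but even so it surfaces the pattern the AWS report warns about: harvested credentials being replayed from attacker infrastructure shortly after legitimate use.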
New report: “The Party’s AI: How China’s New AI Systems are Reshaping Human Rights.” From a summary article:
China is already the world’s largest exporter of AI-powered surveillance technology; new surveillance technologies and platforms developed in China are also not likely to simply stay there. By exposing the full scope of China’s AI-driven control apparatus, this report presents clear, evidence-based insights for policymakers, civil society, the media and technology companies seeking to counter the rise of AI-enabled repression and human rights violations, and China’s growing efforts to project that repression beyond its borders.
The report focuses on four areas where the CCP has expanded its use of advanced AI systems most rapidly between 2023 and 2025: multimodal censorship of politically sensitive images; AI’s integration into the criminal justice pipeline; the industrialisation of online information control; and the use of AI-enabled platforms by Chinese companies operating abroad. Examined together, those cases show how new AI capabilities are being embedded across domains that strengthen the CCP’s ability to shape information, behaviour and economic outcomes at home and overseas.
Because China’s AI ecosystem is evolving rapidly and unevenly across sectors, we have focused on domains where significant changes took place between 2023 and 2025, where new evidence became available, or where human rights risks accelerated. Those areas do not represent the full range of AI applications in China but are the most revealing of how the CCP is integrating AI technologies into its political control apparatus.
News article.
Google has announced that early next year it is discontinuing its dark web report, which was meant to monitor breach data circulating on the dark web.
The news raised some eyebrows, but Google says it’s ending the feature because feedback showed the reports didn’t provide “helpful next steps.” New scans will stop on January 15, 2026, and on February 16, the entire tool will disappear along with all associated monitoring data. Early reactions are mixed: some users express disappointment and frustration, others seem largely indifferent because they already rely on alternatives, and a small group feels relieved that the worry‑inducing alerts will disappear.
All those sentiments are understandable. Knowing that someone found your information on the dark web does not automatically make you safer. You cannot simply log into a dark market forum and ask criminals to delete or return your data.
But there is value in knowing what’s out there, because it can help you respond to the situation before problems escalate. That’s where dark web and data exposure tools show their use: they turn vague fear (“Is my data out there?”) into specific risk (“This email and password are in a breach.”).
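One widely used way to turn that vague fear into a specific answer is the Have I Been Pwned “Pwned Passwords” range API, which uses k-anonymity so that only the first five characters of a password’s SHA-1 hash ever leave your machine. The sketch below is a hedged illustration of that check; the requests dependency and the example passphrase are assumptions.

```python
# k-anonymity password exposure check against the Pwned Passwords range API.
# Only the first five hex characters of the SHA-1 hash are sent to the service.
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # The response is a list of "HASH_SUFFIX:COUNT" lines for that prefix.
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    hits = pwned_count("correct horse battery staple")
    print("Exposed" if hits else "Not found in known breaches", f"({hits} occurrences)")
```

A non-zero result is exactly the kind of specific, actionable signal the discontinued report rarely delivered: this credential is burned, so change it everywhere it was reused.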
The dark web is often portrayed as a shady corner of the internet where stolen data circulates endlessly, and to some extent, that’s accurate. Password dumps, personal records, social security numbers (SSNs), and credit card details are traded for profit. Once combined into massive credential and identity databases accessible to cybercriminals, this information can be used for account takeovers, phishing, and identity fraud.
There are no tools to erase critical information that is circulating on dark web forums, but that was never really the promise.
Google says it is shifting its focus towards “tools that give you more actionable steps,” like Password Manager, Security Checkup, and Results About You. Without doubt, those tools help, but they work better when users understand why they matter. Discontinuing the dark web report removes a simple visibility feature, but it also reminds users that cybersecurity awareness means staying careful on the open web and understanding what attackers might use against them.
The real value comes from three actions: being aware of the exposure, cutting off easy new data sources, and reacting quickly when something goes wrong.
This is where dedicated security tools can help you.
Malwarebytes Personal Data Remover assists you in discovering and removing your data from data broker sites (among others), shrinking the pool of information that can be aggregated, resold, or used to profile you.
Our Digital Footprint scan gives you a clearer picture of where your data has surfaced online, including exposures that could eventually feed into dark web datasets.
Malwarebytes Identity Theft Protection adds ongoing monitoring and recovery support, helping you spot suspicious use of your identity and get expert help if someone tries to open accounts or take out credit in your name.
We don’t just report on data privacy—we help you remove your personal information
Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.
The coming shift to Post-Quantum Cryptography (PQC) is not a distant, abstract threat—it is the single largest, most complex cryptographic migration in the history of cybersecurity. Major breakthroughs are being made with the technology. Google announced on October 22nd, “research that shows, for the first time in history, that a quantum computer can successfully run a verifiable algorithm on hardware, surpassing even the fastest classical supercomputers (13,000x faster).” It has the potential to disrupt every industry. Organizations must prepare now or pay later.
The post Post-Quantum Cryptography (PQC): Application Security Migration Guide appeared first on Security Boulevard.
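A practical first step in any PQC migration is building a cryptographic inventory, and the hedged sketch below shows one crude way to start: grep a source tree for quantum-vulnerable primitives so they can be prioritized. The keyword list and file extensions are illustrative assumptions, and a real inventory would also cover certificates, TLS configurations, HSMs, and third-party dependencies.

```python
# First-pass cryptographic inventory: find references to quantum-vulnerable
# primitives in source code. The pattern list and extensions are illustrative.
import re
from pathlib import Path

LEGACY_PATTERNS = re.compile(r"\b(RSA|ECDSA|ECDH|DSA|DiffieHellman|secp256r1|P-256)\b")
SOURCE_EXTENSIONS = {".py", ".java", ".go", ".ts", ".c", ".cpp", ".rs"}

def inventory(root: str) -> list[tuple[str, int, str]]:
    findings = []
    for path in Path(root).rglob("*"):
        if path.suffix not in SOURCE_EXTENSIONS or not path.is_file():
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if LEGACY_PATTERNS.search(line):
                findings.append((str(path), lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for file, lineno, line in inventory("."):
        print(f"{file}:{lineno}: {line}")
```

The output is noisy by design; its value is giving application security teams a concrete worklist of where classical public-key crypto lives before planning the swap to ML-KEM and ML-DSA style algorithms.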
Why fixing every vulnerability is impossible—and unnecessary. Learn how risk-based vulnerability management prioritizes what to patch, what to defer, and why context matters more than CVSS.
The post Why We’ll Never Patch Everything, and That’s Okay appeared first on Security Boulevard.
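As a toy illustration of that argument, the sketch below combines CVSS with exploitation evidence and asset context to rank findings. The weights, field names, and thresholds are invented for illustration and are not a standard formula; the point is that a mid-severity bug on an exposed, critical asset with known exploitation can outrank a "critical" CVSS score on a lab machine.

```python
# Illustrative risk-based prioritization: context shifts priority more than
# raw CVSS does. All weights and fields below are assumptions for the example.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float          # 0.0 - 10.0
    known_exploited: bool     # e.g. listed in CISA KEV or seen in the wild
    internet_facing: bool
    asset_criticality: int    # 1 (lab box) to 5 (crown jewels)

def risk_score(f: Finding) -> float:
    score = f.cvss_base
    score *= 1.5 if f.known_exploited else 1.0
    score *= 1.3 if f.internet_facing else 1.0
    score *= f.asset_criticality / 3.0
    return round(score, 1)

if __name__ == "__main__":
    findings = [
        Finding("CVE-A", cvss_base=9.8, known_exploited=False, internet_facing=False, asset_criticality=1),
        Finding("CVE-B", cvss_base=7.5, known_exploited=True, internet_facing=True, asset_criticality=5),
    ]
    for f in sorted(findings, key=risk_score, reverse=True):
        print(f.cve_id, risk_score(f))
```

In this example the 7.5 CVSS finding scores roughly 24 while the 9.8 finding scores about 3, which is the whole argument for deferring some "criticals" and fast-tracking others.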
In early December 2025, the React core team disclosed two new vulnerabilities affecting React Server Components (RSC). These issues – Denial-of-Service and Source Code Exposure – were found by security researchers probing the fixes for the previous week’s critical RSC vulnerability, known as “React2Shell”. While these newly discovered bugs do not enable Remote Code Execution, meaning […]
The post Denial-of-Service and Source Code Exposure in React Server Components appeared first on Kratikal Blogs.
The post Denial-of-Service and Source Code Exposure in React Server Components appeared first on Security Boulevard.
Multiple vulnerabilities have been discovered in Apple products, the most severe of which could allow for arbitrary code execution. Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution in the context of the logged on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
Multiple vulnerabilities have been discovered in Google Chrome, the most severe of which could allow for arbitrary code execution. Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution in the context of the logged on user. Depending on the privileges associated with the user an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
Multiple vulnerabilities have been discovered in Adobe products, the most severe of which could allow for arbitrary code execution.
Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution in the context of the logged on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
Multiple vulnerabilities have been discovered in Microsoft products, the most severe of which could allow for remote code execution. Successful exploitation of the most severe of these vulnerabilities could result in an attacker gaining the same privileges as the logged-on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
Multiple vulnerabilities have been discovered in Mozilla products, the most severe of which could allow for arbitrary code execution.
Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution. Depending on the privileges associated with the user an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
A vulnerability in the React Server Components (RSC) implementation has been discovered that could allow for remote code execution. Specifically, it could allow for unauthenticated remote code execution on affected servers. The issue stems from unsafe deserialization of RSC “Flight” protocol payloads, enabling an attacker to send a crafted request that triggers execution of code on the server. This is now being called “React2Shell” by security researchers.
A vulnerability has been discovered in SonicOS, which could allow for Denial of Service (DoS). SonicOS is the operating system that runs on SonicWall's network security appliances, such as firewalls. Successful exploitation of this vulnerability could allow a remote unauthenticated attacker to cause a Denial of Service (DoS), which could cause an impacted firewall to crash. This vulnerability ONLY impacts the SonicOS SSLVPN interface or service if enabled on the firewall.
Multiple vulnerabilities have been discovered in Google Chrome, the most severe of which could allow for arbitrary code execution. Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution in the context of the logged on user. Depending on the privileges associated with the user an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
India's Central Bureau of Investigation uncovered and disrupted a large-scale cyber fraud infrastructure, which it calls a "phishing SMS factory," that sent lakhs (hundreds of thousands) of smishing messages daily across the country to trick citizens into fake digital arrests, loan scams, and investment frauds.
The infrastructure, operated by a registered company, M/s Lord Mahavira Services India Pvt. Ltd., used an online platform to control approximately 21,000 SIM cards that were obtained in violation of Department of Telecommunications rules.
The organized cyber gang, operating from Northern India, provided bulk SMS services to cybercriminals, including foreign operators targeting Indian citizens. The CBI arrested three individuals associated with the cyber gang as part of the broader Operation Chakra-V, which is focused on breaking the backbone of cybercrime infrastructure in India.
The investigation began when CBI studied the huge volume of fake SMS messages people receive daily that often lead to serious financial fraud. Working closely with the Department of Telecommunications and using information from various sources, including the highly debated Sanchar Saathi portal, investigators identified the private company allegedly running the "phishing SMS factory."
CBI conducted searches at several locations across North India, including Delhi, Noida, and Chandigarh, where it discovered a fully active system used for sending phishing messages. The infrastructure included servers, communication devices, USB hubs, dongles, and thousands of SIM cards operating continuously to dispatch fraud messages.
The messages offered fake loans, investment opportunities, and other financial benefits aimed at stealing personal and banking details from innocent people. The scale of operations enabled lakhs of fraud messages to be distributed every day across India.
Early findings of the investigation suggested the involvement of certain channel partners of telecom companies and their employees, who helped illegally arrange SIM cards for the fraudulent operations. This insider facilitation allowed the gang to obtain the massive quantity of SIM cards despite telecommunications regulations designed to prevent such accumulation.
The 21,000 SIM cards were controlled through an online platform specifically designed to send bulk messages, the CBI said.
CBI also seized important digital evidence, unaccounted cash, and cryptocurrency during the operation. The seizures provide investigators with critical data to trace financial flows, identify additional conspirators, and understand the full scope of the fraud network's operations.
The discovery that foreign cyber criminals were using this service to cheat Indian citizens highlights the transnational nature of the operation, with domestic infrastructure being leveraged by overseas fraudsters to target vulnerable Indians.
The dismantling of this phishing SMS factory demonstrates CBI's strategy under Operation Chakra-V to attack the technical backbone of organized cybercrime rather than merely arresting individual fraudsters. By disrupting the infrastructure enabling mass fraud communications, authorities aim to prevent thousands of potential victims from receiving deceptive messages.
As part of Operation Chakra-V crackdown, on Sunday, CBI also filed charges against 17 individuals including four likely Chinese nationals and 58 companies for their alleged involvement in a transnational cyber fraud network operating across multiple Indian states.
CBI said a single cybercrime syndicate was behind this extensive digital and financial infrastructure, which has already defrauded thousands of Indians of more than ₹1,000 crore. The operators used misleading loan apps, fake investment schemes, Ponzi and MLM models, fake part-time job offers, and fraudulent online gaming platforms to carry out the cyber fraud. Google advertisements, bulk SMS campaigns, SIM-box based messaging systems, cloud infrastructure, fintech platforms, and multiple mule bank accounts were all part of the modus operandi of this cybercriminal network. Earlier last week, the CBI had filed similar charges against 30 people, including two Chinese nationals, who ran shell companies and siphoned money from Indian investors through fake cryptocurrency mining platforms, loan apps, and fake online job offers during the COVID-19 lockdown period.
“Well, I can say that Indian Companies so far has been rather negligent of customer's privacy. Anywhere you go, they ask for your mobile number.”
The DPDP Act is designed to ensure that such casual indifference to personal data does not survive the next decade. Below are eight fundamental ways the DPDP Act will change how Indian companies handle data in 2026, with real-world implications for businesses, consumers, and the digital economy.
According to Shashank Bajpai, CISO & CTSO at YOTTA, “The DPDP Act (2023) becomes operational through Rules notified in November 2025; the result is a staggered compliance timetable that places 2026 squarely in the execution phase. That makes 2026 the inflection year when planning becomes measurable operational work and when regulators will expect visible progress.”
In 2026, privacy decisions will increasingly sit with boards, CXOs, and risk committees. Metrics such as consent opt-out rates, breach response time, and third-party risk exposure will become leadership-level conversations, not IT footnotes.
As Gauravdeep Singh, State Head (Digital Transformation), e-Mission Team, MeitY, explains, “Data Principal = YOU.”
Whether it’s a food delivery app requesting location access or a fintech platform processing transaction history, individuals gain the right to control how their data is used—and to change their mind later.
Shukla highlights how deeply embedded poor practices have been: “Hotels take your aadhaar card or driving license and copy and keep it in the drawers inside files without ever telling the customer about their policy regarding the disposal of such PII data safely and securely.”
In 2026, undefined retention is no longer acceptable.
As Shukla notes, “The shops, E-commerce establishments, businesses, utilities collect so much customer PII, and often use third party data processor for billing, marketing and outreach. We hardly ever get to know how they handle the data.”
In 2026, companies will be required to audit vendors, strengthen contracts, and ensure processors follow DPDP-compliant practices, because liability remains with the fiduciary.
As Bajpai notes, “The practical effect is immediate: companies must move from policy documents to implemented consent systems, security controls, breach workflows, and vendor governance.”
Tabletop exercises, breach simulations, and forensic readiness will become standard—not optional.
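As a hedged illustration of what an "implemented consent system" has to be able to evidence at minimum, here is a toy consent-record structure: who consented, for what purpose, when, and whether consent was later withdrawn. The field names are assumptions chosen for illustration; they are not prescribed by the DPDP Act or its Rules.

```python
# Toy consent record for DPDP-style consent management (illustrative only).
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    data_principal_id: str            # the individual whose data is processed
    purpose: str                      # specific, stated purpose of processing
    data_categories: list[str]        # e.g. ["email", "location"]
    granted_at: datetime
    expires_at: Optional[datetime] = None
    withdrawn_at: Optional[datetime] = None

    def is_valid(self, at: Optional[datetime] = None) -> bool:
        """Consent is usable only if granted, not withdrawn, and not expired."""
        at = at or datetime.now(timezone.utc)
        if self.withdrawn_at and self.withdrawn_at <= at:
            return False
        if self.expires_at and self.expires_at <= at:
            return False
        return True

    def withdraw(self) -> None:
        """Withdrawal should be as easy as granting consent."""
        self.withdrawn_at = datetime.now(timezone.utc)
```

Even a structure this small makes audits and breach workflows tractable: every processing decision can be traced back to a record that either validates or does not.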
As Bajpai observes, “This is not just regulation; it is an economic strategy to build domestic capability in cloud, identity, security and RegTech.”
Consent Managers, auditors, privacy tech vendors, and compliance platforms will grow rapidly in 2026. For Indian startups, DPDP compliance itself becomes a business opportunity.
One Reddit user captured the risk succinctly: “On paper, the DPDP Act looks great… But a law is only as strong as public awareness around it.”
Companies that communicate transparently and respect user choice will win trust. Those that don’t will lose customers long before regulators step in.
As Hareesh Tibrewala, CEO at Anhad, notes, “Organizations now have the opportunity to prepare a roadmap for DPDP implementation.”
For many businesses, however, the challenge lies in turning awareness into action, especially when clarity around timelines and responsibilities is still evolving. The concern extends beyond citizens to companies themselves, many of which are still grappling with core concepts such as consent management, data fiduciary obligations, and breach response requirements. With penalties tiered by the nature and severity of violations, ranging from significant fines to amounts running into hundreds of crores, this lack of understanding could prove costly. In 2026, regulators will no longer be looking for intent; they will be looking for evidence of execution. As Bajpai points out, “That makes 2026 the inflection year when planning becomes measurable operational work and when regulators will expect visible progress.”
As Sandeep Shukla cautions, “It will probably take years before a proper implementation at all levels of organizations would be seen.”
But the direction is clear. Personal data in India can no longer be treated casually. The DPDP Act marks the end of informal data handling, and the beginning of a more disciplined, transparent, and accountable digital economy.