Password Manager LastPass Penalized £1.2m by ICO for Security Failures

12 December 2025 at 03:23

The Information Commissioner’s Office (ICO) has fined password manager provider LastPass UK Ltd £1.2 million following a 2022 data breach that compromised the personal information of up to 1.6 million people in the UK. The data breach occurred in August 2022 and was the result of two separate but linked incidents that together enabled a hacker to gain unauthorized access to LastPass’ backup database. The stolen information included customer names, email addresses, phone numbers, and stored website URLs. While the data breach exposed sensitive personal information, the ICO confirmed there is no evidence that hackers were able to decrypt customer passwords. This is due to LastPass’ use of a ‘zero knowledge’ encryption system, under which master passwords never leave customer devices and vaults can only be decrypted locally, so readable vault contents are never shared with the company.
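
To illustrate the zero-knowledge principle in general terms (a simplified sketch, not LastPass’ actual implementation), the key that encrypts a vault is derived on the customer’s device from the master password, so the provider only ever holds ciphertext it cannot read:

```python
import base64
import hashlib
import os

from cryptography.fernet import Fernet  # third-party: pip install cryptography

def derive_vault_key(master_password: str, salt: bytes) -> bytes:
    # The key is derived on the client; the master password itself never leaves the device.
    raw = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)  # 32 raw bytes -> Fernet-compatible key

salt = os.urandom(16)
key = derive_vault_key("correct horse battery staple", salt)
ciphertext = Fernet(key).encrypt(b'{"example.com": "hunter2"}')
# Only salt and ciphertext would ever be synced to a server; without the master
# password, neither the provider nor an attacker holding a stolen backup can
# recover the vault contents.
```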

Incident One: Corporate Laptop Compromised

The first incident involved the corporate laptop of a LastPass employee based in Europe. A hacker gained access to the company’s development environment and obtained encrypted company credentials. Although no personal information was taken at this stage, the credentials could have provided access to the backup database if decrypted. LastPass attempted to mitigate the hacker’s activity and believed the encryption keys remained safe, as they were stored outside the compromised environment in the vaults of four senior employees.

Incident Two: Personal Device Targeted

The second incident proved more damaging. The hacker targeted one of the senior employees who had access to the decryption keys. Exploiting a known vulnerability in a third‑party streaming service, the attacker gained access to the employee’s personal device. A keylogger was installed, capturing the employee’s master password. Multi‑factor authentication was bypassed using a trusted device cookie. This allowed the hacker to access both the employee’s personal and business LastPass vaults, which were linked by a single master password. From there, the hacker obtained the Amazon Web Services (AWS) access key and decryption key stored in the business vault. Combined with information taken the previous day, this enabled the extraction of the backup database containing customer personal information.

ICO’s Findings and Fine on LastPass UK

The ICO investigation concluded that LastPass failed to implement sufficiently strong technical and security measures, leaving customers exposed. Although the company’s zero knowledge encryption protected passwords, the exposure of personal data was deemed a serious failure. John Edwards, UK Information Commissioner, stated: “Password managers are a safe and effective tool for businesses and the public to manage their numerous login details, and we continue to encourage their use. However, as is clear from this incident, businesses offering these services should ensure that system access and use is restricted to reduce risks of attack. LastPass customers had a right to expect their personal information would be kept safe and secure. The company fell short of this expectation, resulting in the proportionate fine announced today.”

Lessons for Businesses

The ICO has urged all UK businesses to review their systems and procedures to prevent similar risks. This case underscores the importance of restricting system access, strengthening cybersecurity measures, and ensuring that employees’ personal devices do not become weak points in corporate networks. While password managers remain a recommended tool for managing login details, the incident shows that even trusted providers can fall short if internal safeguards are not sufficiently strong. The £1.2 million fine against LastPass UK Ltd serves as a clear reminder that companies handling sensitive data must uphold the highest standards of security. Although customer passwords were protected by the company’s zero knowledge encryption system, the exposure of personal information has left millions vulnerable. The ICO’s ruling reinforces the need for constant vigilance in the face of growing cyber threats. For both businesses and individuals, the message is straightforward: adopt strong security practices, conduct regular system reviews, and implement robust employee safeguards to reduce the risk of future data breaches.

Coupang CEO Resigns After Massive Data Breach Exposes Millions of Users

10 December 2025 at 02:42

“Coupang CEO resigns” is a headline many in South Korea expected, but it still signals a major moment for the country’s tech and e-commerce landscape. Coupang Corp. confirmed on Wednesday that its CEO, Park Dae-jun, has stepped down following a massive data breach that exposed the personal information of 33.7 million people, almost two-thirds of the country. Park said he was “deeply sorry” for the incident and accepted responsibility both for the breach and for the company’s response. His exit, while formally described as a resignation, is widely seen as a forced departure given the scale of the fallout and growing anger among customers and regulators. To stabilize the company, Coupang’s U.S. parent, Coupang Inc., has appointed Harold Rogers, its chief administrative officer and general counsel, as interim CEO. The parent company said the leadership change aims to strengthen crisis management and ease customer concerns.

What Happened in the Coupang Data Breach

The company clarified that the latest notice relates to the previously disclosed incident on November 29 and that no new leak has occurred. According to Coupang’s ongoing investigation, the leaked information includes:
  • Customer names and email addresses
  • Full shipping address book details, such as names, phone numbers, addresses, and apartment entrance access codes
  • Portions of the order information
Coupang emphasized that payment details, passwords, banking information, and customs clearance codes were not compromised. As soon as it identified the leak, the company blocked abnormal access routes and tightened internal monitoring. It is now working closely with the Ministry of Science and ICT, the National Police Agency, the Personal Information Protection Commission (PIPC), the Korea Internet & Security Agency (KISA), and the Financial Supervisory Service.

Phishing, Smishing, and Impersonation Alerts

Coupang warned customers to be extra cautious as leaked data can fuel impersonation scams. The company reminded users that:
  • Coupang never asks customers to install apps via phone or text.
  • Unknown links in messages should not be opened.
  • Suspicious communications should be reported to 112 or the Financial Supervisory Service.
  • Customers must verify messages using Coupang’s official customer service numbers.
Users who stored apartment entrance codes in their delivery address book were also urged to change them immediately. The company also clarified that delivery drivers rarely call customers unless necessary to access a building or resolve a pickup issue, a small detail meant to help people recognize potential scam attempts.

Coupang CEO Resigns as South Korea Toughens Cyber Rules

The departure of CEO Park comes at a time when South Korea is rethinking how corporations respond to data breaches. The government’s 2025 Comprehensive National Cybersecurity Strategy puts direct responsibility on CEOs for major security incidents. It also expands CISOs' authority, strengthens IT asset management requirements, and gives chief privacy officers greater influence over security budgets. This shift follows other serious breaches, including SK Telecom’s leak of 23 million user records, which led to a record 134.8 billion won fine. Regulators are now considering fines of up to 1.2 trillion won for Coupang, roughly 3% of its annual sales, under the Personal Information Protection Act. The company also risks losing its ISMS-P certification, a possibility unprecedented for a business of its size.

Industry Scramble After a Coupang Data Breach of This Scale

A Coupang data breach affecting tens of millions of people has sent shockwaves across South Korea’s corporate sector. Authorities have launched emergency inspections of 1,600 ISMS-certified companies and begun unannounced penetration tests. Security vendors say Korean companies are urgently adding multi-factor authentication, AI-based anomaly detection, insider threat monitoring, and stronger access controls. The police’s naming of a former Chinese Coupang employee as a suspect has intensified the focus on insider risk. Government agencies, including the National Intelligence Service, are also working with private partners to shorten cyber-incident analysis times from 14 days to 5 days using advanced AI forensic labs.

Looking Ahead

With the CEO’s resignation now shaping the company’s crisis trajectory, Coupang faces a long road to rebuilding trust among users and regulators. The company says its teams are working to resolve customer concerns quickly, but the broader lesson is clear: cybersecurity failures now carry real consequences, including at the highest levels of leadership.

Black Friday Cybersecurity Survival Guide: Protect Yourself from Scams & Attacks

24 November 2025 at 07:38

Black Friday has evolved into one of the most attractive periods of the year, not just for retailers, but for cybercriminals too. As shoppers rush to grab limited-time deals, attackers exploit the surge in online activity through malware campaigns, phishing scams, payment fraud, and impersonation attacks. With threat actors using increasingly advanced methods, understanding the risks is essential for both shoppers and businesses preparing for peak traffic. This cybersecurity survival guide breaks down the most common Black Friday threats and offers practical steps to stay secure in 2025’s high-risk threat landscape.

Why Black Friday Is a Goldmine for Cybercriminals

Black Friday and Cyber Monday trigger massive spikes in online transactions, email promotions, digital ads, and account logins. This high-volume environment creates the perfect disguise for malicious activity. Attackers know users are expecting deal notifications, promo codes, and delivery updates, making them more likely to click without verifying legitimacy. Retailers also face increased pressure to scale infrastructure quickly, often introducing misconfigurations or security gaps that cybercriminals actively look for.

Common Black Friday Cyber Threats

  1. Phishing & Fake Deal Emails: Cybercriminals frequently impersonate major retailers to push “exclusive” deals or false order alerts. These emails often contain malicious links aimed at stealing login credentials or credit card data.
  2. Malware Hidden in Apps and Ads: Fake shopping apps and malicious ads spread rapidly during Black Friday.
  3. Fake Retail Websites: Dozens of cloned websites appear each year, mimicking popular brands with nearly identical designs. These sites exist solely to steal payment information or personal data.
  4. Payment Card Fraud & Credential Stuffing: With billions of login attempts occurring during Black Friday, attackers exploit weak or reused passwords to take over retail accounts, redeem loyalty points, or make fraudulent purchases.
  5. Marketplace Scams: Fraudulent sellers on marketplaces offer unrealistic discounts, harvest information, and often never deliver the product. Some also use sophisticated social engineering tactics to manipulate buyers.

Cybersecurity Tips for Shoppers

  • Verify Before You Click: Check URLs, sender domains, and website certificates (see the short sketch after this list). Avoid clicking on deal links from emails or messages.
  • Enable Multi-Factor Authentication (MFA): MFA prevents unauthorized access even if an attacker steals your password.
  • Avoid Public Wi-Fi: Unsecured networks can expose your transactions. Use mobile data or a VPN.
  • Use Secure Payment Options: Virtual cards and digital wallets limit your exposure during a breach.
  • Download Apps Only from Official Stores: Stay away from third-party downloads or promo apps not approved by Google or Apple.
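
For technically minded shoppers, the sketch below (standard-library Python, using a placeholder domain) shows the kind of certificate check behind the “Verify Before You Click” tip: confirm who the certificate was issued to, who issued it, and when it expires.

```python
import socket
import ssl
from datetime import datetime, timezone

def inspect_certificate(hostname: str, port: int = 443) -> None:
    """Print basic TLS certificate details for a quick sanity check of a 'deal' site."""
    ctx = ssl.create_default_context()  # verifies the certificate chain and hostname
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    issued_to = dict(rdn[0] for rdn in cert["subject"]).get("commonName", "?")
    issued_by = dict(rdn[0] for rdn in cert["issuer"]).get("organizationName", "?")
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    print(f"{hostname}: issued to {issued_to} by {issued_by}, valid until {expires:%Y-%m-%d}")

inspect_certificate("example.com")  # replace with the shop's domain before checking out
```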

Best Practices for Retailers
  • Strengthen Threat Detection & Monitoring: Retailers must monitor unusual login behavior, bot traffic, and transaction spikes (a simple example follows this list). Cyble’s Attack Surface and Threat Intelligence solutions help businesses identify fake domains, phishing lures, and malware campaigns targeting their brand.
  • Secure Payment Infrastructure: Ensure payment systems are PCI-compliant, updated, and protected from card-skimming malware.
  • Educate Customers: Proactively notify customers about known scams and impersonation risks, especially during high-traffic sales periods.
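
As a rough illustration of the monitoring idea in the first point above (a minimal sketch over hypothetical log records, not a production detector), the snippet flags source IPs that try many different accounts within a short window, a classic credential-stuffing signature:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical login log: (timestamp, source_ip, username, success)
attempts = [
    (datetime(2025, 11, 28, 0, 0, 1), "203.0.113.7", "alice", False),
    (datetime(2025, 11, 28, 0, 0, 2), "203.0.113.7", "bob", False),
    # ...thousands more rows during the sale window
]

WINDOW = timedelta(minutes=5)
THRESHOLD = 50  # distinct accounts attempted from one IP inside the window

def flag_credential_stuffing(attempts):
    """Return source IPs that attempt many *different* accounts in a short window."""
    by_ip = defaultdict(list)
    for ts, ip, user, _success in attempts:
        by_ip[ip].append((ts, user))
    suspicious = []
    for ip, events in by_ip.items():
        events.sort()
        for i, (start, _) in enumerate(events):
            users = {u for t, u in events[i:] if t - start <= WINDOW}
            if len(users) >= THRESHOLD:
                suspicious.append(ip)
                break
    return suspicious

print(flag_credential_stuffing(attempts))
```
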
With malware, phishing, and fraud attempts rising sharply during the shopping season, awareness and proactive defense are essential. By staying vigilant and leveraging trusted cybersecurity tools, both shoppers and businesses can navigate Black Friday securely. See how Cyble protects retailers during high-risk shopping seasons. Book your free 20-minute demo now.

Cyble and BOCRA Sign MoU to Strengthen Botswana’s National Cybersecurity Framework

20 November 2025 at 07:36

Cyble and the Botswana Communications Regulatory Authority (BOCRA) have announced a strategic Memorandum of Understanding (MoU). The Cyble and BOCRA MoU is designed to provide stronger defenses, improved detection capabilities, and faster incident response for critical sectors across Botswana.  The agreement, formed in collaboration with the Botswana National CSIRT, marks an important step toward enhancing the country’s national cybersecurity posture at a time when global cyber threats continue to escalate.  

Strengthening National Cybersecurity Capabilities 

Under the Cyble and BOCRA MoU, both organizations will work closely to advance Botswana’s cybersecurity ecosystem. The collaboration will focus on building stronger cyber defense mechanisms, improving incident response readiness, and equipping national cybersecurity teams with access to Cyble threat intelligence technologies.  Cyble will provide BOCRA with real-time intelligence on emerging threats, leveraging its proprietary AI-native platforms that monitor malicious activity across the open, deep, and dark web. This advanced situational awareness will help Botswana’s security teams quickly identify risk indicators, detect suspicious activity, and mitigate threats before they escalate. The partnership aims to reduce the impact of cyber incidents on citizens, enterprises, and critical national infrastructure. 

Expanding Cyber Skills and Knowledge Transfer 

Another essential focus area of the Cyble and BOCRA MoU is capacity building. The agreement includes initiatives to enhance cybersecurity skills, support workforce development, and promote knowledge transfer. This is expected to help Botswana establish a sustainable talent pipeline capable of addressing modern cyber risks.  According to Cyble, strengthening human expertise is as crucial as deploying technical solutions. Training programs, workshops, and shared intelligence efforts will support BOCRA and the Botswana National CSIRT in their mandate to safeguard the country’s digital landscape.  Manish Chachada, Co-founder and COO of Cyble, emphasized the importance of this collaboration. “This partnership reflects our continued commitment to supporting national cybersecurity priorities across Africa. By combining Cyble’s threat intelligence expertise with BOCRA’s regulatory leadership, we are confident in our ability to strengthen Botswana’s cyber resilience and help the nation navigate the rapidly evolving threat landscape,” he said. 

About BOCRA 

The Botswana Communications Regulatory Authority serves as the national body responsible for regulating the communications sector, advancing cybersecurity programs, enhancing digital infrastructure resilience, and promoting cyber awareness across the country. As cyber threats grow more complex, BOCRA’s role in coordinating national cyber readiness becomes increasingly critical. 

About Cyble 

Cyble, an AI-first cybersecurity company, is recognized globally for its expertise in dark web intelligence, digital risk protection, and predictive cyber defense. Its platforms process more than 50TB of threat data daily, helping organizations detect, measure, and mitigate risks in real time. Cyble works with Fortune 500 enterprises and government entities worldwide, supporting the shift toward intelligent, autonomous cybersecurity solutions.  The Cyble and BOCRA MoU reinforces the shared vision of both organizations to ensure a safer, more secure digital future for Botswana.  Explore how Cyble’s AI-powered threat intelligence and digital risk protection solutions can help your business stay ahead of emerging risks.  Visit www.cyble.com to learn more. 

ARC Data Sale Scandal: Airlines’ Travel Records Used for Warrantless Surveillance

19 November 2025 at 04:18

The ARC Data Sale to U.S. government agencies has come under intense scrutiny following reports of warrantless access to Americans’ travel records. After growing pressure from lawmakers, the Airlines Reporting Corporation (ARC), a data broker collectively owned by major U.S. airlines, has announced it will shut down its Travel Intelligence Program (TIP), a system that allowed federal agencies to search through hundreds of millions of passenger travel records without judicial oversight.

Lawmakers Question ARC Data Sale and Warrantless Access

Concerns over the ARC Data Sale intensified this week after a bipartisan group of lawmakers sent letters to nine airline CEOs urging them to stop the practice immediately. The letter cited reports that government agencies, including the Department of Homeland Security (DHS), the Internal Revenue Service (IRS), the Securities and Exchange Commission (SEC), and the FBI, had been accessing ARC’s travel database without obtaining warrants or court orders. According to the lawmakers, ARC sold access to a system containing approximately 722 million ticket transactions covering 39 months of past and future travel data. This includes bookings made through more than 10,000 U.S.-based travel agencies, popular online travel portals like Expedia, Kayak, and Priceline, and even credit-card reward program bookings. Travel details in this database include a passenger’s name, itinerary, flight numbers, fare details, ticket numbers, and sometimes credit card digits used during the purchase. Documents released through public records requests show that the FBI received travel records from ARC based solely on written requests, bypassing the need for subpoenas. DHS described the database as “an unparalleled intelligence resource.”

IRS Admits Policy Violations in Handling Travel Data

A central point of concern is the revelation that the IRS accessed ARC’s travel database without conducting a legal review or completing a required Privacy Impact Assessment. Under the E-Government Act of 2002, federal agencies must complete such assessments before procuring systems that collect personal data. In a disclosure to Senator Ron Wyden, the IRS admitted it had purchased ARC’s airline data without meeting these requirements. The agency only completed the privacy assessment after receiving an oversight inquiry in 2025. It also confirmed that it had not initially reviewed whether accessing the travel data constituted a search that required a warrant, despite previous commitments to do so after a 2021 investigation into cell-phone location data purchases.

Prospective Surveillance Raises New Privacy Concerns

Beyond historical travel data, lawmakers highlighted that ARC’s tools enabled what they termed “prospective surveillance.” Through automated, recurring searches, government agencies could receive alerts the moment a ticket matching specific criteria was booked. This type of forward-looking monitoring typically requires a higher legal threshold and is allowed only in limited circumstances authorized by Congress. Lawmakers argued that buying such capabilities from a data broker like ARC allowed agencies to circumvent the Fourth Amendment, undermining Americans’ constitutional protection against unreasonable searches. Because ARC only captures bookings made through travel agencies, individuals booking directly with airlines do not have their travel data in the system, effectively creating inconsistent privacy protections based solely on how a ticket is purchased.

ARC Confirms End of Travel Intelligence Program

In a letter sent on Tuesday, ARC CEO Lauri Reishus informed lawmakers that the company would end the Travel Intelligence Program in the coming weeks. The decision follows public and political pressure since September, when media reports first revealed the extent of ARC’s data-sharing arrangements with government agencies. Lawmakers noted that airlines benefit financially when passengers book tickets directly, raising concerns that the surveillance program not only threatened privacy rights but also created potential antitrust implications. As lawmakers push for stronger privacy protections and clearer limits on government surveillance, the ARC data sale case has become a high-profile example of how easily personal travel data can be accessed and shared without passengers’ knowledge.

OpenAI Battles Court Order to Indefinitely Retain User Chat Data in NYT Copyright Dispute

12 November 2025 at 11:40

The demand started at 1.4 billion conversations.

That staggering initial request from The New York Times, later negotiated down to 20 million randomly sampled ChatGPT conversations, has thrust OpenAI into a legal fight that security experts warn could fundamentally reshape data retention practices across the AI industry. The copyright infringement lawsuit has evolved beyond intellectual property disputes into a broader battle over user privacy, data governance, and the obligations AI companies face when litigation collides with privacy commitments.

OpenAI received a court preservation order on May 13, directing the company to retain all output log data that would otherwise be deleted, regardless of user deletion requests or privacy regulation requirements. District Judge Sidney Stein affirmed the order on June 26 after OpenAI appealed, rejecting arguments that user privacy interests should override preservation needs identified in the litigation.

Privacy Commitments Clash With Legal Obligations

The preservation order forces OpenAI to maintain consumer ChatGPT and API user data indefinitely, directly conflicting with the company's standard 30-day deletion policy for conversations users choose not to save. This requirement encompasses data from December 2022 through November 2024, affecting ChatGPT Free, Plus, Pro, and Team subscribers, along with API customers without Zero Data Retention agreements.

ChatGPT Enterprise, ChatGPT Edu, and business customers with Zero Data Retention contracts remain excluded from the preservation requirements. The order does not change OpenAI's policy of not training models on business data by default.

OpenAI implemented restricted access protocols, limiting preserved data to a small, audited legal and security team. The company maintains this information remains locked down and cannot be used beyond meeting legal obligations. No data will be turned over to The New York Times, the court, or external parties at this time.

Also read: OpenAI Announces Safety and Security Committee Amid New AI Model Development

Copyright Case Drives Data Preservation Demands

The New York Times filed its copyright infringement lawsuit in December 2023, alleging OpenAI illegally used millions of news articles to train large language models including ChatGPT and GPT-4. The lawsuit claims this unauthorized use constitutes copyright infringement and unfair competition, arguing OpenAI profits from intellectual property without permission or compensation.

The Times seeks more than monetary damages. The lawsuit demands destruction of all GPT models and training sets using its copyrighted works, with potential statutory and actual damages reaching billions of dollars.

The newspaper's legal team argued their preservation request warranted approval partly because another AI company previously agreed to hand over 5 million private user chats in an unrelated case. OpenAI rejected this precedent as irrelevant to its situation.

Technical and Regulatory Complications

Complying with indefinite retention requirements presents significant engineering challenges. OpenAI must build systems capable of storing hundreds of millions of conversations from users worldwide, requiring months of development work and substantial financial investment.

The preservation order creates conflicts with international data protection regulations including GDPR. While OpenAI's terms of service allow data preservation for legal requirements—a point Judge Stein emphasized—the company argues The Times's demands exceed reasonable discovery scope and abandon established privacy norms.

OpenAI proposed several privacy-preserving alternatives, including targeted searches over preserved samples to identify conversations potentially containing New York Times article text. These suggestions aimed to provide only data relevant to copyright claims while minimizing broader privacy exposure.
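
As a rough illustration only (not OpenAI’s or the court’s actual methodology), a targeted search of this kind could compare overlapping word n-grams between an article and a preserved conversation, flagging transcripts that share several long runs of identical text:

```python
def ngrams(text: str, n: int = 8) -> set:
    """All n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def likely_contains(article: str, conversation: str, n: int = 8, min_hits: int = 3) -> bool:
    """Flag a conversation if it shares several long word n-grams with the article."""
    return len(ngrams(article, n) & ngrams(conversation, n)) >= min_hits

article_text = "..."        # placeholder: copyrighted article text
conversation_text = "..."   # placeholder: one preserved chat transcript
print(likely_contains(article_text, conversation_text))
```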

Recent court modifications provided limited relief. As of September 26, 2025, OpenAI no longer must preserve all new chat logs going forward. However, the company must retain all data already saved under the previous order and maintain information from ChatGPT accounts flagged by The New York Times, with the newspaper authorized to expand its flagged user list while reviewing preserved records.

"Our long-term roadmap includes advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT. We will build fully automated systems to detect safety issues in our products. Only serious misuse and critical risks—such as threats to someone’s life, plans to harm others, or cybersecurity threats—may ever be escalated to a small, highly vetted team of human reviewers." - Dane Stuckey, Chief Information Security Officer, OpenAI 

Implications for AI Governance

The case transforms abstract AI privacy concerns into immediate operational challenges affecting 400 million ChatGPT users. Security practitioners note the preservation order shatters fundamental assumptions about data deletion in AI interactions.

OpenAI CEO Sam Altman characterized the situation as accelerating needs for "AI privilege" concepts, suggesting conversations with AI systems should receive protections similar to attorney-client privilege. The company frames unlimited data preservation as setting dangerous precedents for AI communication privacy.

The litigation presents concerning scenarios for enterprise users integrating ChatGPT into applications handling sensitive information. Organizations using OpenAI's technology for healthcare, legal, or financial services must reassess compliance with regulations including HIPAA and GDPR given indefinite retention requirements.

Legal analysts warn this case likely invites third-party discovery attempts, with litigants in unrelated cases seeking access to adversaries' preserved AI conversation logs. Such developments would further complicate data privacy issues and potentially implicate attorney-client privilege protections.

The outcome will significantly impact how AI companies access and utilize training data, potentially reshaping development and deployment of future AI technologies. Central questions remain unresolved regarding fair use doctrine application to AI model training and the boundaries of discovery in AI copyright litigation.

Also read: OpenAI’s SearchGPT: A Game Changer or Pandora’s Box for Cybersecurity Pros?

Vinomofo Failed to Protect Customer Data, Australian Privacy Commissioner Rules

30 October 2025 at 08:23

Australia's Privacy Commissioner Carly Kind has issued a determination against online wine wholesaler Vinomofo Pty Ltd, finding the company interfered with the privacy of almost one million individuals by failing to take reasonable steps to protect their personal information from security risks.

The determination represents one of the most comprehensive applications of Australian Privacy Principle 11.1 (APP 11.1) to cloud migration projects and provides critical guidance for organizations undertaking similar infrastructure transitions.

The finding follows a 2022 data breach that occurred during a large-scale data migration project, exposing approximately 17GB of data belonging to 928,760 customers and members. The determination goes beyond technical security failures, identifying systemic cultural and governance deficiencies that Commissioner Kind found demonstrated Vinomofo's failure to value or nurture attention to customer privacy.

The Breach: Migration Gone Wrong

In 2022, Vinomofo experienced a data breach amid what the company described as a "large data migration project." An unauthorized third party gained access to the company's database hosted on a testing platform, which, despite being separate from the live website, contained real customer information.

The exposed database held approximately 17GB of data comprising identity information including gender and date of birth, contact information such as names, email addresses, phone numbers, and physical addresses, and financial information. The breach initially came to light when security researcher Troy Hunt flagged the incident on social media, and subsequent investigation revealed the stolen data had been advertised for sale on Russian-language cybercrime forums.

Also read: Wine Company Vinomofo Confirms Data Breach, 500,000 Customers at Risk

The testing platform exposure reveals a fundamental security misconfiguration that has become increasingly common as organizations migrate to cloud infrastructure. Testing and development environments frequently contain production data but receive less rigorous security controls than production systems, creating attractive targets for threat actors who recognize this vulnerability pattern.
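
One common mitigation (a minimal sketch with hypothetical field names, not a description of Vinomofo’s systems) is to pseudonymize direct identifiers before production records are copied into a testing environment, so a compromised test database does not yield usable personal information:

```python
import hashlib
import hmac
import json

SECRET = b"rotate-me-and-keep-out-of-test"  # held only in the production environment

def pseudonymize(record: dict, fields=("name", "email", "phone", "address", "dob")) -> dict:
    """Replace direct identifiers with stable HMAC tokens before loading test data."""
    out = dict(record)
    for field in fields:
        if out.get(field) is not None:
            out[field] = hmac.new(SECRET, str(out[field]).encode(), hashlib.sha256).hexdigest()[:16]
    return out

customer = {"name": "Jane Citizen", "email": "jane@example.com",
            "phone": "+61 400 000 000", "address": "1 Example St", "dob": "1990-01-01"}
print(json.dumps(pseudonymize(customer), indent=2))
```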

Vinomofo's initial public statements downplayed the breach's severity, emphasizing that the company "does not hold identity or financial data such as passports, drivers' licences or credit cards/bank details" and assuring customers that "no passwords, identity documents or financial information were accessed." However, the Privacy Commissioner's investigation revealed more significant failures in the company's security posture and governance.

Privacy as an Afterthought

Perhaps the determination's most significant finding concerns Vinomofo's organizational culture. Commissioner Kind concluded that "Vinomofo's culture and business posture failed to value or nurture attention to customer privacy, as exemplified by failures regarding its policies and procedures, training, and cultural approach to privacy."

This cultural assessment goes beyond technical security measures to examine the organizational prioritization of privacy protection. The Commissioner observed that privacy wasn't embedded into business processes, decision-making frameworks, or corporate values—it remained peripheral rather than fundamental to operations.

The determination identified specific manifestations of this cultural failure:

Policy and Procedure Deficiencies: Vinomofo lacked adequate policies governing data handling during migration projects, security requirements for testing environments, and access controls for sensitive customer information.

Training Inadequacies: The company failed to provide sufficient privacy and security training to personnel involved in data migration and infrastructure management, resulting in preventable errors and oversights.

Cultural Approach: Privacy considerations weren't integrated into strategic planning, risk management, or operational decision-making processes, treating privacy compliance as a checkbox exercise rather than a core business imperative.

Known Risks Ignored

The Commissioner's determination revealed that Vinomofo was aware of deficiencies in its security governance and recognized the need to uplift its security posture at least two years prior to the 2022 incident. This finding transforms the breach from an unfortunate accident into a foreseeable consequence of deliberate inaction.

The determination states: "The respondent was aware of the deficiencies in its security governance and that it needed to uplift its security posture at least 2 years prior to the Incident." This awareness without corresponding action demonstrates a failure of corporate governance that extended beyond the IT security function to board and executive leadership levels.

Organizations face resource constraints and competing priorities that can delay security improvements. However, the Commissioner's finding that Vinomofo knew about security deficiencies for two years before the breach eliminates any claim of unforeseen circumstances. This represents a calculated risk—one that ultimately materialized with consequences for nearly one million customers.

The "Reasonable Steps" Standard

The determination centers on Australian Privacy Principle 11.1, which requires entities holding personal information to take "such steps as are reasonable in the circumstances" to protect that information from misuse, interference, loss, unauthorized access, modification, or disclosure.

The Commissioner concluded that "the totality of steps taken by the respondent were not reasonable in the circumstances" to protect the personal information it held. This holistic assessment examines not individual security controls but the comprehensive security program considering organizational context, threat environment, and data sensitivity.

The determination provides valuable guidance on how "reasonable steps" should be interpreted in the context of data migration projects, particularly when using cloud infrastructure providers. Key considerations include:

Cloud Security Responsibilities: Organizations cannot delegate privacy obligations to cloud service providers. While providers like Amazon Web Services (where Vinomofo hosted its database) offer security features and controls, customers remain responsible for properly configuring and managing those controls.

Testing Environment Security: Testing and development environments containing real customer data must receive security controls commensurate with the sensitivity of that data. The separation from production systems doesn't reduce security obligations when personal information is involved.

Migration Risk Management: Data migration projects create heightened security risks during transition periods when data exists in multiple locations, access patterns change, and configurations evolve. Organizations must implement enhanced controls during migrations to address these elevated risks.

Awareness and Action: Knowing about security deficiencies creates an obligation to address them within reasonable timeframes. Extended delays between identifying risks and implementing mitigations may constitute unreasonable conduct under APP 11.1.

Shared Responsibility Misunderstood

The determination's emphasis on cloud infrastructure provider obligations addresses a widespread misunderstanding of the shared responsibility model that governs cloud security. Cloud providers offer infrastructure and security capabilities, but customers must properly configure and manage those capabilities to protect their data.

Amazon Web Services, where Vinomofo stored the exposed database, provides extensive security features including encryption, access controls, network isolation, and monitoring capabilities. However, these features require proper implementation and configuration by customers. A misconfigured S3 bucket, overly permissive access policies, or inadequate network controls can expose data despite the underlying platform's security capabilities.
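
As an illustration of the kind of configuration check involved (a sketch assuming the boto3 SDK and configured AWS credentials, not tied to Vinomofo’s actual environment), the snippet below reports buckets whose public-access block settings are missing or incomplete:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
EXPECTED = {"BlockPublicAcls", "IgnorePublicAcls", "BlockPublicPolicy", "RestrictPublicBuckets"}

def audit_public_access() -> None:
    """Report buckets whose S3 public-access block is absent or not fully enabled."""
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
            missing = [setting for setting in EXPECTED if not cfg.get(setting)]
            if missing:
                print(f"{name}: public access block incomplete -> {missing}")
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                print(f"{name}: no public access block configured")
            else:
                raise

if __name__ == "__main__":
    audit_public_access()
```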

The breach appears to have resulted from Vinomofo's configuration and management of its AWS environment rather than vulnerabilities in AWS itself. This pattern has become common in cloud data breaches—organizations migrate to cloud platforms attracted by scalability and cost benefits but lack the expertise or diligence to properly secure their cloud deployments.

For organizations using cloud infrastructure providers, the determination establishes clear expectations:

Configuration Management: Organizations must implement rigorous configuration management processes ensuring security settings align with best practices and data protection requirements.

Access Controls: Cloud environments require carefully designed access control policies following least-privilege principles. The flexibility of cloud platforms can create excessive access if not properly managed.

Monitoring and Detection: Cloud platforms provide extensive logging and monitoring capabilities, but organizations must actively use these capabilities to detect suspicious activity and security misconfigurations.

Expertise Requirements: Securing cloud environments requires specialized knowledge. Organizations must ensure personnel managing cloud infrastructure possess appropriate expertise or engage qualified consultants.

The Remedial Declarations

The Commissioner made several declarations requiring Vinomofo to cease certain acts and practices, though specific details weren't disclosed in the public announcement. These declarations typically include requirements to:

  • Implement comprehensive information security programs addressing identified deficiencies.
  • Conduct regular security assessments and audits of systems handling personal information.
  • Provide privacy and security training to relevant personnel.
  • Establish privacy governance frameworks with clear accountability and oversight.
  • Review and enhance policies and procedures governing data handling, particularly during migration projects.

The declarations serve multiple purposes beyond Vinomofo's specific case. They provide a roadmap for other organizations undertaking similar cloud migrations or managing customer data at scale. They establish regulatory expectations about minimum acceptable security practices. And they create precedent that future enforcement actions can reference when addressing similar failures.

From Spreadsheets to Strategic Defense: Andrew Morton Walks Us Through TPRM Transformation

16 October 2025 at 14:31

When Andrew Morton walked into the office, third-party risk management (TPRM) was a bit all over the place—spreadsheets, generic questionnaires, and vendors assessed identically regardless of whether they handled customer credit cards or office supplies. As an ISO 27001 Lead Auditor who reads the fine print on SOC 2 reports, Morton saw an opportunity to rebuild from the ground up. In this wide-ranging conversation, he reveals the three design choices that matter most, explains why executives glaze over at "questionnaires completed" metrics, and shares his biggest red flag when vetting new vendors. From fourth-party visibility to the most misunderstood clause in modern data processing agreements, Morton offers a masterclass in making TPRM both scalable and defensible. Edited excerpts of Andrew Morton's interview below:
From Spreadsheets to Scale
"Vendors were being asked the same set of questions regardless of their risk profile, and assurance was often taken at face value."
What was the inflection point that forced you to re-architect TPRM at Chemist Warehouse, and what did your “target operating model” look like on day 1 vs. today?
AM: Honestly, the inflection point was when I joined the company. It was clear from day one that our third-party risk management wasn’t fit for purpose - it was inconsistent, reactive, and lacked a defensible framework. Vendors were being asked the same set of questions regardless of their risk profile, and assurance was often taken at face value. I saw an opportunity to shift the program into something risk-based, scalable, and aligned with industry standards so that leadership could have real confidence in our vendor ecosystem.
Design Choices that Mattered Most
"Vendor tiering comes first because it’s the foundation - without knowing which vendors are critical, you can’t allocate resources intelligently."
If you could only keep three design decisions in your TPRM stack—continuous external scanning, adaptive questionnaires, or vendor tiering—what stays and why?
AM: Vendor tiering comes first because it’s the foundation - without knowing which vendors are critical, you can’t allocate resources intelligently. It’s what ensures high-risk providers get deep scrutiny while low-risk vendors don’t bog down the team. Adaptive questionnaires come next. They let us dig deeper only when the risk indicators justify it, which makes the process scalable and keeps the business engaged instead of frustrated by generic questionnaires. Independent assurance reports (SOC 2, ISO 27001, PCI, etc.) are my third choice because they let us leverage established, externally validated audits. They give us confidence in a vendor’s baseline controls without reinventing the wheel, and they free up capacity to focus on real risk areas. I’d actually put continuous external scanning just behind those three. It’s valuable, but without tiering, adaptive assessments, and assurance reports, scanning can generate noise without context. The three I chose give me a defensible, risk-based foundation - everything else builds on top of that.
Fourth-Party Visibility that Actually Works
"When it comes to vendors’ vendors, I go one layer deep and focus on critical sub-processors."
How deep do you go on your vendors’ vendors? What’s your minimum viable view (e.g., critical sub-processors list, region & data-type mapping, alerting on material changes), and how do you enforce it contractually?
AM: When it comes to vendors’ vendors, I go one layer deep and focus on critical sub-processors. My minimum viable view includes knowing who those sub-processors are, what regions they operate in, the types of data they handle, and being alerted to any material changes. Just as importantly, I look at whether the vendor has a mature third-party risk assessment process of their own, because I want assurance they’re applying the same standards downstream that we expect from them.
Pre-Production Gates
"Sometimes scanning surfaces outdated domains or low-value assets."
You’ve talked about passive scanning in your earlier conversations. What’s your “go/no-go” policy for a new SaaS vendor if external posture looks weak but the business is pushing?
AM: Passive scanning is a useful early signal, but it’s not an automatic no-go. If a vendor’s external posture looks weak, my first step is to validate with them - sometimes scanning surfaces outdated domains or low-value assets. If it’s confirmed, we take a risk-based approach: for critical vendors, weak posture is a red flag that may pause or even stop onboarding until compensating controls or remediation commitments are in place. For lower-tier vendors, we may accept the risk with conditions - for example, requiring stronger internal controls on our side or limiting the data shared. The no-go line is when the vendor is both critical to operations and unwilling to address or evidence improvements. At that point, I’d escalate to leadership with a clear risk statement: ‘Here’s what the business wants, here’s the security posture, here are the potential consequences.’ That way, the decision is transparent and defensible, even if it means saying no.
Beyond Time-to-Assess
"When we cut assessment time, the metrics that really resonated with execs were the ones tied directly to business exposure."
You have spoken about cutting assessment time dramatically—great. Which risk metrics resonated most with execs (e.g., % critical vendors with open highs >30 days, time-to-remediate by tier, control coverage drift), and which fell flat?
AM: When we cut assessment time, the metrics that really resonated with execs were the ones tied directly to business exposure. Things like the percentage of critical vendors with open high-severity findings older than 30 days, or the risk level by tier, gave them a clear view of where risk was lingering and whether vendors were responsive. What fell flat were the more operational or technical metrics - things like the number of questionnaires sent. That’s important to know for us internally for running the program, but executives tune out because this doesn’t translate to risk or business impact. The key is to frame metrics around exposure and risk.
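As a concrete illustration of the first metric mentioned above (a hypothetical findings register, not Chemist Warehouse data), the percentage of critical vendors with high-severity findings open for more than 30 days can be computed along these lines:

```python
from datetime import date

# Hypothetical open-findings register: one row per unresolved finding
findings = [
    {"vendor": "AcmePay", "severity": "high", "opened": date(2025, 8, 1)},
    {"vendor": "AcmePay", "severity": "low",  "opened": date(2025, 9, 20)},
    {"vendor": "BoxCo",   "severity": "high", "opened": date(2025, 9, 28)},
    {"vendor": "MailCo",  "severity": "high", "opened": date(2025, 6, 15)},
]
critical_vendors = {"AcmePay", "BoxCo", "WareHub"}  # the tier-1 population

def pct_critical_with_aged_highs(findings, critical, as_of, age_days=30):
    """% of critical vendors with at least one high finding open longer than age_days."""
    aged = {f["vendor"] for f in findings
            if f["vendor"] in critical
            and f["severity"] == "high"
            and (as_of - f["opened"]).days > age_days}
    return 100 * len(aged) / len(critical)

print(f"{pct_critical_with_aged_highs(findings, critical_vendors, date(2025, 10, 1)):.0f}% "
      "of critical vendors have high-severity findings open for more than 30 days")
```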
Assurance You Actually Trust
"When a vendor presents an ISO 27001 certificate or SOC 2 report, I never just take the badge at face value. I treat assurance reports as one input, not a guarantee."
You are an ISO 27001 Lead Auditor/Implementer, so, when a vendor presents an ISO cert or SOC 2, what do you verify beyond the badge—scope boundaries, carve-outs, sampling, last major NCs?
AM: When a vendor presents an ISO 27001 certificate or SOC 2 report, I never just take the badge at face value. I go deeper into the scope boundaries - does the certification actually cover the systems and services we’re relying on, or just a data center or narrow business unit? I also look closely at carve-outs and exclusions - for example, if key cloud services or sub-processors aren’t covered, that’s a material gap. With SOC 2, I review the sampling approach and the audit period to make sure the testing was meaningful, not just point-in-time or limited in coverage. Finally, I always check whether there were any major non-conformities or exceptions noted, and how they were closed out. In short, I treat assurance reports as one input, not a guarantee - the detail behind the badge tells me whether I can rely on it or whether I need to dig deeper.
Shifting Culture, Not Just Tools
"I’d engage stakeholders earlier, co-design parts of the process so they feel ownership, and communicate in a way that links their priorities back to the shared goal."
What did you learn about stakeholder change—procurement, legal, store ops—when you rolled out the new TPRM model? If you had to repeat it post-merger, what would you do differently?
AM: Rolling out the new TPRM model reinforced that every stakeholder has different priorities and perspectives. But the underlying purpose is the same: to protect the business from risk while enabling it to operate effectively. If I had to do it again, I’d engage stakeholders earlier, co-design parts of the process so they feel ownership, and communicate in a way that links their priorities back to the shared goal. That alignment makes adoption smoother and ensures that, despite different lenses, everyone’s working toward the same outcome.
Vendor Onboarding Efficiency
"We shifted to a risk-tiered model with adaptive questionnaires and pre-vetted assurance reports. Low-risk vendors go through a lightweight process, while critical ones get deeper scrutiny."
What are the biggest challenges you see when onboarding new third parties at scale, and how have you streamlined that process without slowing down the business?
AM: The biggest challenges in onboarding third parties at scale are consistency, visibility, and speed. Every business unit wants to go live with their vendor yesterday, so security can sometimes be seen as slowing things down. You don’t want to treat all vendors the same, because that overwhelms the process and creates bottlenecks. To streamline, we shifted to a risk-tiered model with adaptive questionnaires and pre-vetted assurance reports. Low-risk vendors go through a lightweight process, while critical ones get deeper scrutiny. We also built in early checkpoints with procurement and legal, so security isn’t a last-minute hurdle. That’s allowed us to reduce onboarding friction, keep the business moving, and still be confident we’re focusing our effort where it matters most.
Building Risk Tiers that Make Sense
"A vendor handling PI, for example, will always sit in a higher tier, while a vendor with no data access and no system integration will land much lower."
How do you classify vendors into critical, high, medium, and low-risk tiers in practice, and what criteria have proven most reliable in your experience?
AM: We classify vendors into risk tiers using a structured model - for us it’s tiers 1 through 5. The criteria that have proven the most reliable are:
  • Data classification - what types of data the vendor stores or accesses, especially sensitive or regulated data like PI/SI.
  • System and infrastructure access - whether they interface with or have privileged access to our core/critical applications or infrastructure.
  • Regulatory and contractual obligations - if the vendor falls under specific regimes like PCI, GDPR, or local privacy laws, they’re automatically in a higher tier, and
  • Business criticality - whether their failure could materially disrupt operations or customer experience.
These inputs together determine the tier. So, a vendor handling PI, for example, will always sit in a higher tier, while a vendor with no data access and no system integration will land much lower. This approach means we can defend our decisions, scale assessments, and ensure critical vendors get proportionate scrutiny without overwhelming the business.
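One illustrative way to combine the four criteria above into a tier (the weights and cut-offs here are assumptions for the sketch, not the actual model described in the interview):

```python
def vendor_tier(handles_pi: bool, privileged_access: bool,
                regulated: bool, business_critical: bool) -> int:
    """Map the four interview criteria onto a 1 (highest risk) to 5 (lowest risk) tier."""
    score = sum([
        2 if handles_pi else 0,         # sensitive or regulated data weighs heaviest
        2 if privileged_access else 0,  # privileged access to core systems or infrastructure
        1 if regulated else 0,          # PCI, GDPR, or local privacy regimes
        1 if business_critical else 0,  # failure would materially disrupt operations
    ])
    return max(1, 5 - score)            # score 0 -> tier 5, score >= 4 -> tier 1

# A payments processor handling PI with privileged access and PCI scope lands in tier 1;
# an office-supplies vendor with none of those attributes stays in tier 5.
print(vendor_tier(True, True, True, True), vendor_tier(False, False, False, False))
```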
Balancing Questionnaires with Evidence
"Self-attestation questionnaires are useful for coverage and efficiency - they give us a first view across the vendor landscape."
How do you strike the balance between using self-attestation questionnaires versus validating controls with independent evidence when assessing third parties?
AM: For me it’s about balance and proportionality. Self-attestation questionnaires are useful for coverage and efficiency - they give us a first view across the vendor landscape. But on their own they’re not reliable, especially for higher-tier vendors. That’s where independent evidence comes in - things like SOC 2 reports and/or ISO27001 certificates. Lower-tier vendors may only need to self-attest, mid-tier vendors provide self-attestation plus some supporting documentation, and higher-tier vendors must back it up with independent evidence. That way we scale the program, but still get defensible assurance where it matters most.
Collaboration with Procurement and Legal
"Procurement is on the front line. Legal ensures the right protections are baked into contracts."
What role do procurement and legal teams play in strengthening third-party risk management, and how do you foster alignment across these functions?
AM: Procurement and legal are key to making TPRM effective. Procurement is on the front line - they’re the ones who see new vendors first, so they help us embed risk assessments early instead of security being a last-minute hurdle. Legal ensures the right protections are baked into contracts - breach notification, sub-processor transparency, audit rights, data handling requirements. One of the things we’ve done to foster alignment is that we’ve created a simple flow chart that maps who does what, and when. By framing it as a shared purpose rather than separate processes, we’ve been able to work as one team.
Communicating Risk to the Board
"My focus is always on clarity and consequence, so risks map directly to business impact."
When reporting to senior leadership or the board, how do you frame third-party and supply-chain risks in terms they find most actionable?
AM: I try to frame third-party risk for leadership in terms of business outcomes - like regulatory exposure, business disruption, or reputational harm - rather than technical details. My focus is always on clarity and consequence, so risks map directly to business impact - that's what tends to land and where the conversation naturally goes.
Lessons Learned from Scaling
"You can’t assess everyone the same way - tiering and a risk-based approach are critical to avoid bottlenecks."
What were the biggest lessons you learned while scaling third-party risk management across hundreds of vendors, and what advice would you give to organizations just starting that journey?
AM: The biggest lesson I learned scaling TPRM across hundreds of vendors is that you can’t assess everyone the same way - tiering and a risk-based approach are critical to avoid bottlenecks. Another was that stakeholder alignment matters as much as tools or processes. Procurement, legal, and the business all need to see TPRM as an enabler, not a blocker. Finally, I learned that while automation and adaptive questionnaires save time, you still need independent assurance like SOC 2 reports or ISO27001 certifications to validate. My advice to those starting out is to begin with a clear tiering model, early stakeholder buy-in, and simple, scalable processes - you can add sophistication later, but without those foundations, you’ll struggle at scale.
Looking Ahead in GRC
"Routine tasks like evidence collection, monitoring, and control testing will increasingly be handled by AI and automation."
How do you see the discipline of GRC itself evolving over the next three to five years, especially with increasing automation and AI support?
AM: I see GRC evolving into a more automated, insight-driven discipline over the next three to five years. Routine tasks like evidence collection, monitoring, and control testing will increasingly be handled by AI and automation, freeing teams to focus on strategic risk decisions and exception management. I also expect GRC to become more integrated across the enterprise, connecting IT, compliance, privacy, and third-party risk so decisions are informed by real-time data. Ultimately, the value will shift from just checking boxes to providing actionable insights that help the business make informed, risk-aware decisions faster.
Rapid Fire
One vendor control you’d mandate tomorrow if you could.
AM: If I could mandate one vendor control tomorrow, it would be multi-factor authentication, especially for all administrative and privileged access. It’s a simple but highly effective control that dramatically reduces the likelihood of account compromise, applies across all vendor types, and immediately strengthens our security posture without adding unnecessary complexity.
One metric you’d delete from TPRM dashboards.
AM: If I could remove one metric from TPRM dashboards, it would be the number of questionnaires sent or completed. It’s useful internally to show the volume of work and the team’s effort, but it doesn’t actually reflect risk or control effectiveness. Executives respond better to metrics tied to business impact - like open high-severity findings - because that’s what drives informed decisions.
Most misunderstood clause in modern DPAs.
AM: The most misunderstood clause in modern DPAs in my opinion is typically the sub-processor notification and approval section. Misalignment here can introduce downstream risks, especially for critical data or cross-border processing, so it’s important to clarify expectations up front and ensure the clause is actionable, not just boilerplate.
Your “Red Flag” in a vendor’s first 5 minutes.
AM: Beyond transparency, the other key red flag I watch for is reluctance to commit contractually to basic security obligations - like notifying us of sub-processor changes or breaches. If a vendor hesitates on these points, it can signal deeper gaps in controls or governance, and it prompts a much closer review before proceeding.