โŒ

Normal view

Received yesterday โ€” 12 December 2025

Building Trustworthy AI Agents

12 December 2025 at 07:00

The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we havenโ€™t made trustworthy. We canโ€™t. And todayโ€™s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we are or that we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions...

The post Building Trustworthy AI Agents appeared first on Security Boulevard.

Building Trustworthy AI Agents

12 December 2025 at 07:00

The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us into doubting what we know or who we are, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context, with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions.

These aren’t edge cases. They’re the result of building AI systems without basic integrity controls. Integrity is the third leg of data security—the old CIA triad. We’re good at availability and working on confidentiality, but we’ve never properly solved integrity. Now AI personalization has exposed the gap by accelerating the harms.

The scope of the problem is large. A good AI assistant will need to be trained on everything we do and will need access to our most intimate personal interactions. This means a level of intimacy greater than your relationship with your email provider, your social media account, your cloud storage, or your phone. It requires an AI system that is both discreet and trustworthy when provided with that data. The system needs to be accurate and complete, but it also needs to keep data private: to selectively disclose pieces of it when required, and to keep it secret otherwise. No current AI system comes close to meeting this standard.

To further development along these lines, others and I have proposed separating users’ personal data stores from the AI systems that will use them. It makes sense; the engineering expertise that designs and develops AI systems is completely orthogonal to the security expertise that ensures the confidentiality and integrity of data. And by separating them, advances in security can proceed independently of advances in AI.

What would this sort of personal data store look like? Confidentiality without integrity gives you access to wrong data. Availability without integrity gives you reliable access to corrupted data. Integrity is what makes the other two meaningful. Here are six requirements, all of which emerge from treating integrity as the organizing principle of trustworthy AI.

First, it would be broadly accessible as a data repository. It would hold personal data about ourselves as well as transaction data from our interactions. It would include data we create when interacting with others—emails, texts, social media posts—along with data other systems infer about us. Some of it would be raw data, and some of it would be processed: revealed preferences, conclusions drawn by other systems, maybe even the raw weights of a personal LLM.

Second, it would be broadly accessible as a source of data. This data would need to be accessible to different LLM systems; it can’t be tied to a single AI model. Our AI future will include many different models—some chosen by us for particular tasks, and some thrust upon us by others. We would want any of those models to be able to use our data.

Third, it would need to be able to prove the accuracy of data. Imagine one of these systems being used to negotiate a bank loan, or participate in a first-round job interview with an AI recruiter. In these instances, the other party will want both relevant data and some sort of proof that the data are complete and accurate.
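A rough sketch of how such a proof might work at its simplest: the data store signs a cryptographic digest of the record it discloses, and the relying party verifies the signature. The example below uses the third-party Python cryptography package; the record fields and function names are illustrative, not part of any proposed standard, and proving completeness (that nothing relevant was withheld) would take more machinery than a signature alone.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def attest(record: dict, private_key: Ed25519PrivateKey) -> dict:
    """Canonicalize the record, hash it, and sign the digest so a relying
    party (a bank, an AI recruiter) can check it was not altered."""
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(payload).digest()
    return {"record": record, "signature": private_key.sign(digest).hex()}

def verify(attestation: dict, public_key) -> bool:
    """Recompute the digest and check the data store's signature."""
    payload = json.dumps(attestation["record"], sort_keys=True).encode()
    digest = hashlib.sha256(payload).digest()
    try:
        public_key.verify(bytes.fromhex(attestation["signature"]), digest)
        return True
    except InvalidSignature:
        return False

store_key = Ed25519PrivateKey.generate()
signed = attest({"employer": "Acme", "years_employed": 4}, store_key)
assert verify(signed, store_key.public_key())
```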

Fourth, it would be under the userโ€™s fine-grained control and audit. This is a deeply detailed personal dossier, and the user would need to have the final say in who could access it, what portions they could access, and under what circumstances. Users would need to be able to grant and revoke this access quickly and easily, and be able to go back in time and see who has accessed it.
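As a sketch of what fine-grained, revocable control with an audit trail could look like, here is a minimal standard-library Python model. The class and field names are hypothetical; a real system would need durable storage, authentication, and tamper-evident logging.

```python
from datetime import datetime, timezone

class PersonalDataStore:
    def __init__(self):
        self._data = {}       # portion name -> data
        self._grants = set()  # (party, portion) pairs currently allowed
        self.audit_log = []   # append-only record of every access attempt

    def put(self, portion: str, data) -> None:
        self._data[portion] = data

    def grant(self, party: str, portion: str) -> None:
        self._grants.add((party, portion))

    def revoke(self, party: str, portion: str) -> None:
        self._grants.discard((party, portion))

    def read(self, party: str, portion: str):
        allowed = (party, portion) in self._grants
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "party": party,
            "portion": portion,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{party} may not read {portion}")
        return self._data.get(portion)

store = PersonalDataStore()
store.put("purchases", ["..."])
store.grant("loan-agent", "purchases")
store.read("loan-agent", "purchases")    # allowed, and logged
store.revoke("loan-agent", "purchases")  # access ends immediately
```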

Fifth, it would be secure. The attacks against this system are numerous. There are the obvious read attacks, where an adversary attempts to learn a personโ€™s data. And there are also write attacks, where adversaries add to or change a userโ€™s data. Defending against both is critical; this all implies a complex and robust authentication system.

Sixth, and finally, it must be easy to use. If weโ€™re envisioning digital personal assistants for everybody, it canโ€™t require specialized security training to use properly.

I’m not the first to suggest something like this. Researchers have proposed a “Human Context Protocol” (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5403981) that would serve as a neutral interface for personal data of this type. And in my capacity at a company called Inrupt, Inc., I have been working on an extension of Tim Berners-Lee’s Solid protocol for distributed data ownership.

The engineering expertise to build AI systems is orthogonal to the security expertise needed to protect personal data. AI companies optimize for model performance, but data security requires cryptographic verification, access control, and auditable systems. Separating the two makes sense; neither discipline can be treated as an afterthought.

Fortunately, decoupling personal data stores from AI systems means security can advance independently from performance (https://ieeexplore.ieee.org/document/10352412). When you own and control your data store with high integrity, AI can’t easily manipulate you because you see what data it’s using and can correct it. It can’t easily gaslight you because you control the authoritative record of your context. And you determine which historical data are relevant or obsolete. Making this all work is a challenge, but it’s the only way we can have trustworthy AI assistants.

This essay was originally published in IEEE Security & Privacy.


Identity Management in the Fragmented Digital Ecosystem: Challenges and Frameworks

11 December 2025 at 13:27

Modern internet users navigate an increasingly fragmented digital ecosystem dominated by countless applications, services, brands and platforms. Engaging with online offerings often requires selecting and remembering passwords or taking other steps to verify and protect oneโ€™s identity. However, following best practices has become incredibly challenging due to various factors. Identifying Digital Identity Management Problems in..


Ex-Employee Sues Washington Post Over Oracle EBS-Related Data Breach

8 December 2025 at 00:16

The Washington Post last month reported it was among the victims of data breaches tied to the Oracle EBS vulnerabilities, with a threat actor compromising the data of more than 9,700 former and current employees and contractors. Now, a former worker is launching a class-action lawsuit against the Post, claiming inadequate security.


China Hackers Using Brickstorm Backdoor to Target Government, IT Entities

5 December 2025 at 17:36

Chinese state-sponsored groups are using the Brickstorm backdoor to access and gain persistence in government and tech firm networks, part of the ongoing effort by the PRC to establish long-term footholds in agency and critical infrastructure IT environments, according to a report by U.S. and Canadian security agencies.


Dangerous RCE Flaw in React, Next.js Threatens Cloud Environments, Apps

4 December 2025 at 10:54

Security and developer teams are scrambling to address a highly critical security flaw in frameworks tied to the popular React JavaScript library. Not only is the vulnerability, which also is in the Next.js framework, easy to exploit, but React is widely used, including in 39% of cloud environments.


Air fryer app caught asking for voice data (re-air) (Lock and Code S06E24)

2 December 2025 at 11:22

This week on the Lock and Code podcastโ€ฆ

It’s often said online that if a product is free, you’re the product, but what if that bargain were no longer true? What if, depending on the device you paid hard-earned money for, you still became a product yourself, to be measured, anonymized, collated, shared, or sold, often away from view?

In 2024, a consumer rights group out of the UK teased this new reality when it published research into whether people’s air fryers (seriously) might be spying on them.

By analyzing the associated Android apps for three separate air fryer models from three different companies, researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user’s phone.

As the researchers wrote:

โ€œIn the air fryer category, as well as knowing customersโ€™ precise location, all three products wanted permission to record audio on the userโ€™s phone, for no specified reason.โ€

Bizarrely, these types of data requests are far from rare.

Today, on the Lock and Code podcast, we revisit a 2024 episode in which host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen utilities that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook.

These stories arenโ€™t about mass government surveillance, and theyโ€™re not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated.

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: โ€œSpellboundโ€ by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: โ€œGood Godโ€ by Wowa (unminus.com)


Listen upโ€”Malwarebytes doesnโ€™t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Australian Man Gets 7 Years for โ€˜Evil Twinโ€™ WiFi Attacks

1 December 2025 at 12:38


An Australian man has been sentenced to more than seven years in jail on charges that he created โ€˜evil twinโ€™ WiFi networks to hack into womenโ€™s online accounts to steal intimate photos and videos. The Australian Federal Police (AFP) didnโ€™t name the man in announcing the sentencing, but several Australian news outlets identified him as Michael Clapsis, 44, of Perth, an IT professional who allegedly used his skills to carry out the attacks. He was sentenced to seven years and four months in Perth District Court on November 28, and will be eligible for parole after serving half that time, according to the Sydney Morning Herald. The AFP said Clapsis pled guilty to 15 charges, ranging from unauthorised access or modification of restricted data to unauthorised impairment of electronic communication, failure to comply with an order, and attempted destruction of evidence, among other charges.

โ€˜Evil Twinโ€™ WiFi Network Detected on Australian Domestic Flight

The AFP investigation began in April 2024, when an airline reported that its employees had identified a suspicious WiFi network mimicking a legitimate access point – known as an “evil twin” – during a domestic flight. On April 19, 2024, AFP investigators searched the man’s luggage when he arrived at Perth Airport, where they seized a portable wireless access device, a laptop and a mobile phone. They later executed a search warrant “at a Palmyra home.” Forensic analysis of data and seized devices “identified thousands of intimate images and videos, personal credentials belonging to other people, and records of fraudulent WiFi pages,” the AFP said.

The day after the search warrant, the man deleted more than 1,700 items from his account on a data storage application and “unsuccessfully tried to remotely wipe his mobile phone,” the AFP said. Between April 22 and 23, 2024, the AFP said the man “used a computer software tool to gain access to his employer’s laptop to access confidential online meetings between his employer and the AFP regarding the investigation.”

The man allegedly used a portable wireless access device, called a “WiFi Pineapple,” to detect device probe requests and instantly create a network with the same name. A device would then connect to the evil twin network automatically. The network took people to a webpage and prompted them to log in using an email or social media account, where their credentials were then captured. AFP said its cybercrime investigators identified data related to use of the fraudulent WiFi pages at airports in Perth, Melbourne and Adelaide, as well as on domestic flights, “while the man also used his IT privileges to access restricted and personal data from his previous employment.” “The man unlawfully accessed social media and other online accounts linked to multiple unsuspecting women to monitor their communications and steal private and intimate images and videos,” the AFP said.
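Because devices auto-join networks whose names they have seen before, an evil twin is hard to spot by eye. One defensive heuristic, sketched below under the assumption of a hypothetical scan-result format, is to flag any known SSID that shows up with an unfamiliar access point or weaker security than the network you trust. This is an illustration of the detection idea, not a vetted security tool.

```python
def find_suspect_networks(scan_results, trusted):
    """Flag networks that reuse a trusted SSID but differ from what we know.

    scan_results: list of {"ssid": str, "bssid": str, "security": str}
    trusted: {ssid: {"bssids": set of known BSSIDs, "security": str}}
    """
    suspects = []
    for net in scan_results:
        known = trusted.get(net["ssid"])
        if known is None:
            continue  # not a network this device would auto-join
        if net["bssid"] not in known["bssids"]:
            suspects.append((net, "unfamiliar access point for this SSID"))
        elif net["security"] != known["security"]:
            suspects.append((net, "security downgraded (e.g., open portal)"))
    return suspects

suspects = find_suspect_networks(
    scan_results=[{"ssid": "AirlineWiFi", "bssid": "de:ad:be:ef:00:01",
                   "security": "OPEN"}],
    trusted={"AirlineWiFi": {"bssids": {"aa:bb:cc:dd:ee:ff"},
                             "security": "WPA2"}},
)
# -> flagged: unfamiliar access point, security downgraded
```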

Victims of Evil Twin WiFi Attack Enter Statements

At the sentencing, a prosecutor read from emotional impact statements from the man’s victims, detailing the distress they suffered and the enduring feelings of shame and loss of privacy. One said, “I feel like I have eyes on me 24/7,” according to the Morning Herald. Another said, “Thoughts of hatred, disgust and shame have impacted me severely. Even though they were only pictures, they were mine not yours.” The paper said Clapsis’ attorney told the court that “He’s sought to seek help, to seek insight, to seek understanding and address his way of thinking.”

The case highlights the importance of avoiding free public WiFi when possible – and not accessing sensitive websites or applications if one must be used. Any network that requests personal details should be avoided. “If you do want to use public WiFi, ensure your devices are equipped with a reputable virtual private network (VPN) to encrypt and secure your data,” the AFP said. “Disable file sharing, don’t use things like online banking while connected to public WiFi and, once you disconnect, change your device settings to ‘forget network’.”

Cybersecurity Coalition to Government: Shutdown is Over, Get to Work

28 November 2025 at 13:37
budget open source supply chain cybersecurity ransomware White House Cyber Ops

The Cybersecurity Coalition, an industry group of almost a dozen vendors, is urging the Trump Administration and Congress, now that the government shutdown is over, to take a number of steps to strengthen the country's cybersecurity posture as China, Russia, and other foreign adversaries accelerate their attacks.


French Regulator Fines Vanity Fair Publisher โ‚ฌ750,000 for Persistent Cookie Consent Violations

28 November 2025 at 05:49


France's data protection authority discovered that when visitors clicked the button to reject cookies on Vanity Fair (vanityfair[.]fr), the website continued placing tracking technologies on their devices and reading existing cookies without consent, a violation that now costs publisher Les Publications Condรฉ Nast โ‚ฌ750,000 in fines six years after privacy advocate NOYB first filed complaints against the media company.

The November 20 sanction by CNIL's restricted committee marks the latest enforcement action in France's aggressive campaign to enforce cookie consent requirements under the ePrivacy Directive.

NOYB, the European privacy advocacy organization led by Max Schrems, filed the original public complaint in December 2019 concerning cookies placed on user devices by the Vanity Fair France website. After multiple investigations and discussions with CNIL, Condรฉ Nast received a formal compliance order in September 2021, with proceedings closed in July 2022 based on assurances of corrective action.

Repeated Violations Despite Compliance Order

CNIL conducted follow-up online investigations in July and November 2023, then again in February 2025, discovering that the publisher had failed to implement compliant cookie practices despite the earlier compliance order. The restricted committee found Les Publications Condรฉ Nast violated obligations under Article 82 of France's Data Protection Act across multiple dimensions.

Investigators discovered cookies requiring consent were placed on visitors' devices as soon as they arrived on vanityfair.fr, even before users interacted with the information banner to express a choice. This automatic placement violated fundamental consent requirements mandating that tracking technologies only be deployed after users provide explicit permission.
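That first finding (cookies set before any consent interaction) is the easiest to check empirically. A rough sketch of such a check, using the third-party Python requests package: fetch the page with a brand-new session and list whatever cookies the server sets. This sees only HTTP Set-Cookie headers, not cookies set by JavaScript, so a real audit like CNIL’s would drive an instrumented browser instead; the URL is the one from the article.

```python
import requests

def cookies_before_consent(url: str):
    """Return cookies a site sets on first contact, before the visitor
    has interacted with any consent banner."""
    with requests.Session() as session:  # fresh session: no stored consent
        session.get(url, timeout=10)
        return [(cookie.name, cookie.domain) for cookie in session.cookies]

for name, domain in cookies_before_consent("https://www.vanityfair.fr"):
    print(f"set before any consent: {name} ({domain})")
```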

The website lacked clarity in information provided to users about cookie purposes. Some cookies appeared categorized as "strictly necessary" and therefore exempt from consent obligations, but useful information about their actual purposes remained unavailable to visitors. This misclassification potentially allowed the publisher to deploy tracking technologies under false pretenses.

Most significantly, consent refusal and withdrawal mechanisms proved completely ineffective. When users clicked the "Refuse All" button in the banner or attempted to withdraw previously granted consent, new cookies subject to consent requirements were nevertheless placed on their devices while existing cookies continued being read.

Escalating French Enforcement Actions

The fine amount takes into account that Condé Nast had already been issued a formal notice in 2021 but failed to correct its practices, as well as the number of people affected and the multiple breaches of the rules protecting users with respect to cookies.

The CNIL fine represents another in a series of NOYB-related enforcement actions, with the French authority previously fining Criteo โ‚ฌ40 million in 2023 and Google โ‚ฌ325 million earlier in 2025. Spain's AEPD issued a โ‚ฌ100,000 fine against Euskaltel in related NOYB litigation.

Also read: Google Slapped with $381 Million Fine in France Over Gmail Ads, Cookie Consent Missteps

According to reports, Condรฉ Nast acknowledged violations in its defense but cited technical errors, blamed the Internet Advertising Bureau's Transparency and Consent Framework for misleading information, and stated the cookies in question fall under the functionality category. The company claimed good faith and cooperative efforts while arguing against public disclosure of the sanction.

The Cookie Consent Conundrum

French enforcement demonstrates the ePrivacy Directive's teeth in protecting user privacy. CNIL maintains material jurisdiction to investigate and sanction cookie operations affecting French users, with the GDPR's one-stop-shop mechanism not applying since cookie enforcement falls under separate ePrivacy rules transposed into French law.

The authority has intensified actions against dark patterns in consent mechanisms, particularly practices making cookie acceptance easier than refusal. Previous CNIL decisions against Google and Facebook established that websites offering immediate "Accept All" buttons must provide equivalent simple mechanisms for refusing cookies, with multiple clicks to refuse constituting non-compliance.

The six-year timeline from initial complaint to final sanction illustrates both the persistence privacy enforcement requires and the extended timeframes companies can exploit while maintaining non-compliant practices that generate advertising revenue through unauthorized user tracking.

FBI: Account Takeover Scammers Stole $262 Million this Year

26 November 2025 at 16:51

The FBI says that account takeover scams this year have resulted in more than 5,100 complaints in the U.S. and $262 million stolen, and Bitdefender says the combination of the growing number of ATO incidents and risky consumer behavior is creating an increasingly dangerous environment that will let such fraud expand.


Russian-Backed Threat Group Uses SocGholish to Target U.S. Company

26 November 2025 at 11:10

The Russian state-sponsored group behind the RomCom malware family used the SocGholish loader for the first time to launch an attack on a U.S.-based civil engineering firm, continuing its targeting of organizations that offer support to Ukraine in its ongoing war with its larger neighbor.


The Latest Shai-Hulud Malware is Faster and More Dangerous

25 November 2025 at 16:17

A new iteration of the Shai-Hulud malware that ran through npm repositories in September is faster, more dangerous, and more destructive, generating huge numbers of malicious repositories, compromising scripts, and attacking GitHub users in one of the most significant supply chain attacks this year.


Attackers are Using Fake Windows Updates in ClickFix Scams

24 November 2025 at 21:40

Huntress threat researchers are tracking a ClickFix campaign that includes a variant of the scheme in which the malicious code is hidden in the fake image of a Windows Update and, if inadvertently downloaded by victims, will deploy the info-stealing malware LummaC2 and Rhadamanthys.


Hack of SitusAMC Puts Data of Financial Services Firms at Risk

24 November 2025 at 13:00

SitusAMC, a services provider with clients like JPMorgan Chase and Citi, said its systems were hacked and the data of clients and their customers possibly compromised, sending banks and other firms scrambling. The data breach illustrates the growth in the number of such attacks on third-party providers in the financial services sector.


U.S., International Partners Target Bulletproof Hosting Services

22 November 2025 at 22:36

Agencies in the U.S. and other countries have cracked down on bulletproof hosting providers this month, including Media Land, Hypercore, and associated companies and individuals, while the Five Eyes threat intelligence alliance published BPH mitigation guidelines for ISPs, cloud providers, and network defenders.


Salesforce: Some Customer Data Accessed via Gainsight Breach

22 November 2025 at 12:43

An attack on the app of CRM platform provider Gainsight led to the data of hundreds of Salesforce customers being compromised, highlighting the ongoing threats posed by third-party software in SaaS environments and illustrating how one data breach can lead to others, cybersecurity pros say.


SEC Dismisses Remains of Lawsuit Against SolarWinds and Its CISO

21 November 2025 at 15:52

The SEC dismissed the remaining charges in the lawsuit filed in 2023 against software maker SolarWinds and CISO Timothy Brown in the wake of the massive Sunburst supply chain attack, in which a Russian nation-state group planted a malicious update in SolarWinds software that then compromised the systems of some customers.


The Data Privacy Risk Lurking in Paperless Government

18 November 2025 at 10:57

The world is becoming increasingly paperless, and most organizations, including federal agencies, are following suit. Switching from paper-based processes to digital ones offers great benefits. However, the security and compliance challenges that come with this shift arenโ€™t to be taken lightly. As the federal government goes paperless to cut costs and modernize operational processes, a..


Google Uses Courts, Congress to Counter Massive Smishing Campaign

16 November 2025 at 12:05

Google is suing the Smishing Triad group behind the Lighthouse phishing-as-a-service kit, which has been used over the past two years to scam more than 1 million people around the world with fraudulent package-delivery or E-ZPass toll messages, stealing millions of credit card numbers. Google also is backing bills in Congress to address the threat.


Conduent Faces Financial Hit, Lawsuits from Breach Affecting 10.5 Million

14 November 2025 at 22:58

The intrusion a year ago into Conduent Business Solutions’ systems, likely by the SafePay ransomware group, affected more than 10.5 million individuals and will likely cost the company more than $50 million in related expenses, plus millions more to settle the lawsuits that are piling up.


127 Groups Oppose Changes to GDPR, EU Data Protection Laws

14 November 2025 at 16:39


A coalition of 127 civil society organizations and trade unions has banded together to oppose proposed changes that they warn could severely weaken EU data protection and privacy laws like GDPR. In an open letter released this week, the groups expressed “serious alarm at the forthcoming EU Digital Omnibus proposals, part of a wide deregulation agenda. What is being presented as a ‘technical streamlining’ of EU digital laws is, in reality, an attempt to covertly dismantle Europe’s strongest protections against digital threats.”

“These are the protections that keep everyone’s data safe, governments accountable, protect people from having artificial intelligence (AI) systems decide their life opportunities, and ultimately keep our societies free from unchecked surveillance,” the groups added. Many of the same groups expressed concerns about the Digital Omnibus process earlier this year, but with a comprehensive proposal expected from the European Commission next week and reports that drafts of the legislation would significantly weaken GDPR and other privacy protections, the groups are stepping up their efforts.

GDPR, AI Rules Could Be Weakened in Digital Omnibus Process

Netzpolitik said that GDPR and other protections in several areas would be “significantly reduced to allow for greater data usage” under the Digital Omnibus proposals, including making it easier to train AI systems with personal data. Online tracking and cookie restrictions would also be weakened. “Storing and reading non-essential cookies on users’ devices would no longer be permitted only with their consent,” Netzpolitik said. “Instead, the full range of legal bases offered by the GDPR would be opened up. This includes the legitimate interests of website operators and tracking companies. Users would then only have the option of opting out retroactively.”

Article 9 of the GDPR concerning special categories of data would also be targeted. Article 9 offers special protection for data that includes “ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership.” It also includes the processing of genetic data, biometric data for identification purposes, health data, and data about a person’s sex life or orientation.

“The Commission aims to define sensitive data more narrowly,” Netzpolitik said. “Only data that explicitly reveals the aforementioned information would then be afforded special protection. This means that if, for example, a person indicates their sexual orientation in a selection field, this would still be afforded special protection. However, if a data processor infers a person’s presumed sexual orientation based on perceived interests or characteristics, current restrictions would no longer apply.” Protections for genetic and biometric data are more likely to remain unchanged “due to their unique and specific characteristics.”

Groups Decry โ€˜Rushed and Opaqueโ€™ Process

The 127 civil society groups and trade unions charged that the Digital Omnibus process “is being done under the radar, using rushed and opaque processes designed to avoid democratic oversight.” The same approach has been used with other Omnibus proposals with damaging results, they said. “As a result, supposedly minimal changes under the guise of ‘simplification’ have already jeopardised Europe’s core social and environmental protections,” they said.

The Digital Omnibus, they said, will reportedly weaken “the only clear rule that stops companies and governments from constantly tracking what people do on their devices, part of the ePrivacy framework. This will make it a lot easier for those in power to control people’s phones, cars or smart homes, while also revealing sensitive information about where people go, and with whom.”

EU AI rules could also be weakened, the groups said, including guardrails to ensure “that AI is developed safely and without discrimination, as well as delaying key elements like penalties for selling dangerous AI systems.” Currently, AI tools that could affect important decisions like whether people can obtain benefits must register in a public database. Under the proposed changes, they said, “those providing AI tools could unilaterally and secretly exempt themselves from all obligations – and neither the public nor authorities would know.”

“By recasting vital laws like the GDPR, ePrivacy, AI Act, DSA, DMA, Open Internet Regulation (DNA), Corporate Sustainability Due Diligence Directive and other crucial laws as ‘red tape’, the EU is giving in to powerful corporate and state actors who oppose the principles of a fair, safe and democratic digital landscape and who want to lower the bar of EU laws for their own benefit,” they charged. They urged the European Commission to stop any attempts to reopen the GDPR, ePrivacy framework, AI Act and other “core digital rights protections.”

ShinyHunters Compromises Legacy Cloud Storage System of Checkout.com

14 November 2025 at 15:15

Checkout.com said the notorious ShinyHunters threat group breached an improperly decommissioned legacy cloud storage system last used by the company in 2020 and stole some merchant data. The hackers demanded a ransom, but the company instead will give the amount demanded to cybersecurity research groups.


OpenAI Battles Court Order to Indefinitely Retain User Chat Data in NYT Copyright Dispute

12 November 2025 at 11:40


The demand started at 1.4 billion conversations.

That staggering initial request from The New York Times, later negotiated down to 20 million randomly sampled ChatGPT conversations, has thrust OpenAI into a legal fight that security experts warn could fundamentally reshape data retention practices across the AI industry. The copyright infringement lawsuit has evolved beyond intellectual property disputes into a broader battle over user privacy, data governance, and the obligations AI companies face when litigation collides with privacy commitments.

OpenAI received a court preservation order on May 13, directing the company to retain all output log data that would otherwise be deleted, regardless of user deletion requests or privacy regulation requirements. District Judge Sidney Stein affirmed the order on June 26 after OpenAI appealed, rejecting arguments that user privacy interests should override preservation needs identified in the litigation.

Privacy Commitments Clash With Legal Obligations

The preservation order forces OpenAI to maintain consumer ChatGPT and API user data indefinitely, directly conflicting with the company's standard 30-day deletion policy for conversations users choose not to save. This requirement encompasses data from December 2022 through November 2024, affecting ChatGPT Free, Plus, Pro, and Team subscribers, along with API customers without Zero Data Retention agreements.
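The operational shape of that conflict is simple to see in code. Below is a minimal, hypothetical sketch of a 30-day purge job that must now skip anything flagged under a legal hold; none of the field names come from OpenAI's actual systems.

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def purge_expired(conversations, now=None):
    """Split conversations into kept and deleted, honoring legal holds.

    A record past its 30-day window is normally deleted, but a
    preservation order overrides the user's deletion request.
    """
    now = now or datetime.now(timezone.utc)
    kept, deleted = [], []
    for convo in conversations:
        expired = now - convo["deletion_requested_at"] > RETENTION
        if expired and not convo.get("legal_hold", False):
            deleted.append(convo)
        else:
            kept.append(convo)  # retained, perhaps indefinitely
    return kept, deleted
```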

ChatGPT Enterprise, ChatGPT Edu, and business customers with Zero Data Retention contracts remain excluded from the preservation requirements. The order does not change OpenAI's policy of not training models on business data by default.

OpenAI implemented restricted access protocols, limiting preserved data to a small, audited legal and security team. The company maintains this information remains locked down and cannot be used beyond meeting legal obligations. No data will be turned over to The New York Times, the court, or external parties at this time.

Also read: OpenAI Announces Safety and Security Committee Amid New AI Model Development

Copyright Case Drives Data Preservation Demands

The New York Times filed its copyright infringement lawsuit in December 2023, alleging OpenAI illegally used millions of news articles to train large language models including ChatGPT and GPT-4. The lawsuit claims this unauthorized use constitutes copyright infringement and unfair competition, arguing OpenAI profits from intellectual property without permission or compensation.

The Times seeks more than monetary damages. The lawsuit demands destruction of all GPT models and training sets using its copyrighted works, with potential damages reaching billions of dollars in statutory and actual damages.

The newspaper's legal team argued their preservation request warranted approval partly because another AI company previously agreed to hand over 5 million private user chats in an unrelated case. OpenAI rejected this precedent as irrelevant to its situation.

Technical and Regulatory Complications

Complying with indefinite retention requirements presents significant engineering challenges. OpenAI must build systems capable of storing hundreds of millions of conversations from users worldwide, requiring months of development work and substantial financial investment.

The preservation order creates conflicts with international data protection regulations including GDPR. While OpenAI's terms of service allow data preservation for legal requirementsโ€”a point Judge Stein emphasizedโ€”the company argues The Times's demands exceed reasonable discovery scope and abandon established privacy norms.

OpenAI proposed several privacy-preserving alternatives, including targeted searches over preserved samples to identify conversations potentially containing New York Times article text. These suggestions aimed to provide only data relevant to copyright claims while minimizing broader privacy exposure.

Recent court modifications provided limited relief. As of September 26, 2025, OpenAI no longer must preserve all new chat logs going forward. However, the company must retain all data already saved under the previous order and maintain information from ChatGPT accounts flagged by The New York Times, with the newspaper authorized to expand its flagged user list while reviewing preserved records.

"Our long-term roadmap includes advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT. We will build fully automated systems to detect safety issues in our products. Only serious misuse and critical risksโ€”such as threats to someoneโ€™s life, plans to harm others, or cybersecurity threatsโ€”may ever be escalated to a small, highly vetted team of human reviewers." - Dane Stuckey, Chief Information Security Officer, OpenAIย 

Implications for AI Governance

The case transforms abstract AI privacy concerns into immediate operational challenges affecting 400 million ChatGPT users. Security practitioners note the preservation order shatters fundamental assumptions about data deletion in AI interactions.

OpenAI CEO Sam Altman characterized the situation as accelerating needs for "AI privilege" concepts, suggesting conversations with AI systems should receive protections similar to attorney-client privilege. The company frames unlimited data preservation as setting dangerous precedents for AI communication privacy.

The litigation presents concerning scenarios for enterprise users integrating ChatGPT into applications handling sensitive information. Organizations using OpenAI's technology for healthcare, legal, or financial services must reassess compliance with regulations including HIPAA and GDPR given indefinite retention requirements.

Legal analysts warn this case likely invites third-party discovery attempts, with litigants in unrelated cases seeking access to adversaries' preserved AI conversation logs. Such developments would further complicate data privacy issues and potentially implicate attorney-client privilege protections.

The outcome will significantly impact how AI companies access and utilize training data, potentially reshaping development and deployment of future AI technologies. Central questions remain unresolved regarding fair use doctrine application to AI model training and the boundaries of discovery in AI copyright litigation.

Also read: OpenAIโ€™s SearchGPT: A Game Changer or Pandoraโ€™s Box for Cybersecurity Pros?

New Yorkโ€™s First-of-Its-Kind Algorithmic Pricing Law Goes Into Effect

11 November 2025 at 03:29


In a major step toward transparency in digital commerce, New York’s Algorithmic Pricing Disclosure Act officially took effect on November 10, 2025, requiring businesses to disclose when they use personalized algorithmic pricing to determine what consumers pay. The law mandates that any company using automated pricing systems based on personal data must display a clear and visible notice stating, “This price was set by an algorithm using your personal data.” Companies that fail to comply could face civil penalties of up to $1,000 per violation, marking one of the most stringent algorithmic pricing disclosure requirements in the United States.

Scope and Impact of Personalized Algorithmic Pricing Law

Under the Algorithmic Pricing Disclosure Act, businesses operating in or serving customers within New York must disclose if they use personalized algorithmic pricing — defined as dynamic pricing set by an algorithm that uses personal data. The law broadly defines personal data as any information that identifies or could reasonably be linked, directly or indirectly, to a specific consumer or device. This includes data derived from online behavior, purchase history, device identifiers, or other digital footprints — regardless of whether users voluntarily provided such data.

Entities covered by the law include those domiciled or conducting business in New York, regardless of where their headquarters are based, if they promote algorithmically determined prices to consumers in the state. The law also clarifies that certain data uses and sectors are exempt. For instance, location data used solely by transportation network companies and for-hire vehicles to calculate fares based on mileage or trip duration is excluded. Additionally, regulated financial institutions, insurance companies, and businesses offering subscription-based contracts fall outside the Act’s scope.
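For a covered business, compliance reduces to one rule: whenever personal data influenced the quoted price, attach the statutory notice. A minimal, hypothetical sketch follows (not legal advice; only the quoted notice text comes from the Act as reported here):

```python
# The exact notice wording the Act requires, per the article.
NOTICE = "This price was set by an algorithm using your personal data."

def price_response(list_price: float, personalized_price: float | None) -> dict:
    """Build a price payload, attaching the required disclosure whenever
    the price was set algorithmically from the customer's personal data."""
    if personalized_price is not None:
        return {"price": personalized_price, "disclosure": NOTICE}
    return {"price": list_price}  # uniform pricing: no notice required

print(price_response(19.99, None))   # {'price': 19.99}
print(price_response(19.99, 22.49))  # includes the disclosure
```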

Court Upholds the Algorithmic Pricing Disclosure Act

Implementation of the Algorithmic Pricing Disclosure Act had been delayed following a First Amendment challenge in the Southern District of New York. The case questioned whether compelling companies to disclose algorithmic pricing practices infringed upon free speech rights. However, the court upheld the lawโ€™s constitutionality, ruling that the required disclosure was โ€œplainly factualโ€ and not controversial merely because businesses might prefer not to reveal their pricing methods. With this ruling, enforcement proceeded without further delay.

Attorney Generalโ€™s Office to Enforce Personalized Algorithmic Pricing Compliance

New York Attorney General Letitia James has made clear her intention to rigorously enforce the new algorithmic pricing disclosure law. On November 5, 2025, her office issued a consumer alert urging residents to report companies that fail to display the required notices through an official online complaint form.

The Attorney General’s Office is empowered to investigate potential violations whenever there is “reason to believe” a company is not in compliance. This can include complaints from consumers or findings from state-led audits. Violators will first receive a notice to cure alleged violations within a specified period. If they fail to take corrective action, the Attorney General can seek injunctions and monetary penalties — up to $1,000 per instance, without any maximum cap. Importantly, enforcement does not require proof of individual consumer harm or financial loss, making it easier for regulators to act swiftly.

Illuminate Education Fined $5.1 Million for Failing to Protect Student Data

10 November 2025 at 04:17


The Attorneys General of California, Connecticut, and New York have announced a $5.1 million settlement with Illuminate Education, Inc., an educational technology company, for failing to adequately protect student data in a 2021 cyber incident. The Illuminate Education data breach exposed the personal information of millions of students across the United States, including over 434,000 students in California alone. The settlement includes $3.25 million in civil penalties for California and a series of court-approved requirements to strengthen the companyโ€™s cybersecurity posture. The announcement marks one of the most significant enforcement actions under Californiaโ€™s K-12 Pupil Online Personal Information Protection Act (KOPIPA), highlighting growing regulatory attention on the privacy of childrenโ€™s data in the digital age.

Illuminate Education Data Breach That Exposed Sensitive Student Data

The 2021 Illuminate Education data breach occurred when a hacker gained access to Illuminate’s systems using credentials belonging to a former employee, an account that had never been deactivated. Once inside the network, the attacker created new credentials, maintained access for several days, and stole or deleted student data. The compromised information included names, races, medical conditions, and details related to special education services — all considered highly sensitive personal data. An investigation by the California Department of Justice found that Illuminate failed to implement basic cybersecurity practices, including:
  • Terminating access for former employees
  • Monitoring suspicious logins or activities
  • Securing backup databases separately from live systems
Investigators also revealed that Illuminate had made misleading claims in its Privacy Policy, suggesting its safeguards met federal and state requirements when they did not. The company had even advertised itself as a signatory of the Student Privacy Pledge, only to be removed after the breach.
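The first of the failures listed above (terminating access for former employees) is also the most mechanically preventable. A minimal sketch of the missing control: periodically reconcile the identity provider's active accounts against HR's roster of current employees. The data sources and names here are hypothetical.

```python
def find_stale_accounts(active_accounts, current_employees):
    """Return active accounts with no matching current employee --
    exactly the kind of orphaned credential used in the breach."""
    current = set(current_employees)
    return [account for account in active_accounts if account not in current]

stale = find_stale_accounts(
    active_accounts=["alice", "bob", "former.employee"],
    current_employees=["alice", "bob"],
)
print(stale)  # ['former.employee'] -> deactivate and alert
```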

Legal and Regulatory Response

California Attorney General Rob Bonta called the case “a reminder to all tech companies, especially those handling children’s data, that California law demands strong safeguards.” “Illuminate failed to appropriately safeguard the data of school children,” Bonta said. “Our investigation revealed troubling security deficiencies that should never have happened for a company entrusted with protecting sensitive data about kids.”

Connecticut Attorney General William Tong added that the case marked the first enforcement action under Connecticut’s Student Data Privacy Law. “Technology is everywhere in schools today,” he said. “This action holds Illuminate accountable and sends a clear message to educational technology companies that they must take privacy obligations seriously.”

New York Attorney General Letitia James echoed similar concerns: “Students, parents, and teachers should be able to trust that their schools’ online platforms are safe and secure. Illuminate violated that trust and failed to take even basic steps to protect student data.”

Compliance Measures and Industry Lessons

As part of the settlement, Illuminate has agreed to:
  • Strengthen account management and terminate credentials of former employees.
  • Enable real-time monitoring for suspicious activity.
  • Segregate backup databases from active networks.
  • Notify authorities promptly in case of future breaches.
  • Remind school districts to review stored student data for retention and deletion compliance.
This Illuminate Education data breach case follows several other enforcement actions led by Attorney General Bonta, including settlements with Sling TV, Blackbaud, and Tilting Point Media, each involving data privacy violations.

EdTech Sector Under Scrutiny

The Illuminate case emphasizes the critical need for cybersecurity in educational technology. As schools increasingly depend on digital platforms, student data has become a prime target for cybercriminals. Experts emphasize that proactive measures such as continuous monitoring, identity management, and early threat detection are essential to prevent similar incidents. Platforms like Cyble Vision are designed to help organizations detect breaches, monitor risks in real-time, and safeguard sensitive data against evolving cyber threats. For education providers, regulators, and enterprises alike, this case serves as a clear signal โ€” cyber negligence is no longer an option. To learn how Cyble can help strengthen your organizationโ€™s data protection and threat monitoring capabilities, request a demo and see how proactive intelligence can prevent the next breach.

Meet NEO 1X: The Robot That Does Chores and Spies on You?

10 November 2025 at 00:00

The future of home robotics is here โ€” and itโ€™s a little awkward. Meet the NEO 1X humanoid robot, designed to help with chores but raising huge cybersecurity and privacy questions. We discuss what it can actually do, the risks of having an always-connected humanoid in your home, and why itโ€™s definitely not the โ€œRobot [โ€ฆ]



Radware: Bad Actors Spoofing AI Agents to Bypass Malicious Bot Defenses

8 November 2025 at 12:01

AI agents are increasingly being used to search the web, making traditional bot mitigation systems inadequate and opening the door for malicious actors to develop and deploy bots that impersonate legitimate agents from AI vendors to launch account takeover and financial fraud attacks.
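One mitigation this points toward is to stop trusting a bot's self-declared identity. For crawlers whose operators publish verification guidance, a common pattern is a reverse-then-forward DNS check: resolve the connecting IP to a hostname, confirm the hostname belongs to the claimed vendor, then resolve it forward and confirm it maps back to the same IP. The sketch below uses Python's standard library; the allowlisted domain suffix is a placeholder, not a real vendor's.

```python
import socket

def verify_agent_ip(ip: str, allowed_suffixes=(".example-ai-vendor.com",)) -> bool:
    """Reverse-then-forward DNS check for a client claiming to be a
    legitimate AI agent. Spoofed user agents fail the reverse lookup."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)           # reverse lookup
        if not host.endswith(allowed_suffixes):
            return False                                # wrong operator domain
        forward_ips = socket.gethostbyname_ex(host)[2]  # forward confirmation
        return ip in forward_ips
    except (socket.herror, socket.gaierror):
        return False  # unresolvable IPs are treated as unverified
```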


OpenAIโ€™s ChatGPT Atlas: What It Means for Cybersecurity and Privacy

3 November 2025 at 00:00

In this episode, we explore OpenAI’s groundbreaking release, ChatGPT Atlas, the AI-powered browser that remembers your activities and acts on your behalf. Discover its features, implications for enterprise security, and the risks it poses to privacy. Join hosts Tom Eston and Scott Wright as they discuss everything from the browser’s memory function to vulnerabilities like […]



FCC Chair Carr Looks to Eliminate Telecom Cybersecurity Ruling

31 October 2025 at 09:46
FCC Commissioner Brendan Carr speaking at the 2018 Conservative Political Action Conference (CPAC) in National Harbor, Maryland.

FCC Chair Brendan Carr said the agency will look to eliminate a declaratory ruling made by his predecessor that aimed to give the government more power to force carriers to strengthen the security of their networks in the wake of last year’s widespread hacks by the Chinese nation-state threat group Salt Typhoon.


Threat Actors Weaponizing Open Source AdaptixC2 Tied to Russian Underworld

30 October 2025 at 09:39

AdaptixC2, a legitimate open-source red-team tool used to assess an organization’s security, is being repurposed by threat actors for use in their malicious campaigns. Threat researchers with Silent Push have linked the abuse of the technology back to a Russian-speaking bad actor who calls himself “RalfHacker.”


Vinomofo Failed to Protect Customer Data, Australian Privacy Commissioner Rules

30 October 2025 at 08:23


Australia's Privacy Commissioner Carly Kind has issued a determination against online wine wholesaler Vinomofo Pty Ltd, finding the company interfered with the privacy of almost one million individuals by failing to take reasonable steps to protect their personal information from security risks.

The determination represents one of the most comprehensive applications of Australian Privacy Principle 11.1 (APP 11.1) to cloud migration projects and provides critical guidance for organizations undertaking similar infrastructure transitions.

The finding follows a 2022 data breach that occurred during a large-scale data migration project, exposing approximately 17GB of data belonging to 928,760 customers and members. The determination goes beyond technical security failures, identifying systemic cultural and governance deficiencies that Commissioner Kind found demonstrated Vinomofo's failure to value or nurture attention to customer privacy.

The Breach: Migration Gone Wrong

In 2022, Vinomofo experienced a data breach amid what the company described as a "large data migration project." An unauthorized third party gained access to the company's database hosted on a testing platform, which, despite being separate from the live website, contained real customer information.

The exposed database held approximately 17GB of data comprising identity information including gender and date of birth, contact information such as names, email addresses, phone numbers, and physical addresses, and financial information. The breach initially came to light when security researcher Troy Hunt flagged the incident on social media, and subsequent investigation revealed the stolen data had been advertised for sale on Russian-language cybercrime forums.

Also read: Wine Company Vinomofo Confirms Data Breach, 500,000 Customers at Risk

The testing platform exposure reveals a fundamental security misconfiguration that has become increasingly common as organizations migrate to cloud infrastructure. Testing and development environments frequently contain production data but receive less rigorous security controls than production systems, creating attractive targets for threat actors who recognize this vulnerability pattern.
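A minimal sketch of the safeguard that pattern argues for: strip or pseudonymize direct identifiers before production data ever reaches a testing platform. The field names below mirror the breached data as the article describes it; the salted-hash approach is one illustrative option, not the only compliant design.

```python
import hashlib

SENSITIVE_FIELDS = {"name", "email", "phone", "address", "date_of_birth"}

def mask_for_testing(record: dict, salt: bytes) -> dict:
    """Replace identifying fields with stable salted hashes, so test data
    keeps its shape and joinability while identifying no one."""
    masked = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            digest = hashlib.sha256(salt + str(value).encode()).hexdigest()
            masked[field] = digest[:12]
        else:
            masked[field] = value
    return masked

print(mask_for_testing(
    {"name": "Jane Citizen", "email": "jane@example.com", "segment": "member"},
    salt=b"rotate-me-per-environment",
))
```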

Vinomofo's initial public statements downplayed the breach's severity, emphasizing that the company "does not hold identity or financial data such as passports, drivers' licences or credit cards/bank details" and assuring customers that "no passwords, identity documents or financial information were accessed." However, the Privacy Commissioner's investigation revealed more significant failures in the company's security posture and governance.

Privacy as an Afterthought

Perhaps the determination's most significant finding concerns Vinomofo's organizational culture. Commissioner Kind concluded that "Vinomofo's culture and business posture failed to value or nurture attention to customer privacy, as exemplified by failures regarding its policies and procedures, training, and cultural approach to privacy."

This cultural assessment goes beyond technical security measures to examine the organizational prioritization of privacy protection. The Commissioner observed that privacy wasn't embedded into business processes, decision-making frameworks, or corporate valuesโ€”it remained peripheral rather than fundamental to operations.

The determination identified specific manifestations of this cultural failure:

Policy and Procedure Deficiencies: Vinomofo lacked adequate policies governing data handling during migration projects, security requirements for testing environments, and access controls for sensitive customer information.

Training Inadequacies: The company failed to provide sufficient privacy and security training to personnel involved in data migration and infrastructure management, resulting in preventable errors and oversights.

Cultural Approach: Privacy considerations weren't integrated into strategic planning, risk management, or operational decision-making processes, treating privacy compliance as a checkbox exercise rather than a core business imperative.

Known Risks Ignored

The Commissioner's determination revealed that Vinomofo was aware of deficiencies in its security governance and recognized the need to uplift its security posture at least two years prior to the 2022 incident. This finding transforms the breach from an unfortunate accident into a foreseeable consequence of deliberate inaction.

The determination states: "The respondent was aware of the deficiencies in its security governance and that it needed to uplift its security posture at least 2 years prior to the Incident." This awareness without corresponding action demonstrates a failure of corporate governance that extended beyond the IT security function to board and executive leadership levels.

Organizations face resource constraints and competing priorities that can delay security improvements. However, the Commissioner's finding that Vinomofo knew about security deficiencies for two years before the breach eliminates any claim of unforeseen circumstances. This represents a calculated riskโ€”one that ultimately materialized with consequences for nearly one million customers.

The "Reasonable Steps" Standard

The determination centers on Australian Privacy Principle 11.1, which requires entities holding personal information to take "such steps as are reasonable in the circumstances" to protect that information from misuse, interference, loss, unauthorized access, modification, or disclosure.

The Commissioner concluded that "the totality of steps taken by the respondent were not reasonable in the circumstances" to protect the personal information it held. This holistic assessment examines not individual security controls but the comprehensive security program considering organizational context, threat environment, and data sensitivity.

The determination provides valuable guidance on how "reasonable steps" should be interpreted in the context of data migration projects, particularly when using cloud infrastructure providers. Key considerations include:

Cloud Security Responsibilities: Organizations cannot delegate privacy obligations to cloud service providers. While providers like Amazon Web Services (where Vinomofo hosted its database) offer security features and controls, customers remain responsible for properly configuring and managing those controls.

Testing Environment Security: Testing and development environments containing real customer data must receive security controls commensurate with the sensitivity of that data. The separation from production systems doesn't reduce security obligations when personal information is involved.

Migration Risk Management: Data migration projects create heightened security risks during transition periods when data exists in multiple locations, access patterns change, and configurations evolve. Organizations must implement enhanced controls during migrations to address these elevated risks.

Awareness and Action: Knowing about security deficiencies creates an obligation to address them within reasonable timeframes. Extended delays between identifying risks and implementing mitigations may constitute unreasonable conduct under APP 11.1.

Shared Responsibility Misunderstood

The determination's emphasis on cloud infrastructure provider obligations addresses a widespread misunderstanding of the shared responsibility model that governs cloud security. Cloud providers offer infrastructure and security capabilities, but customers must properly configure and manage those capabilities to protect their data.

Amazon Web Services, where Vinomofo stored the exposed database, provides extensive security features including encryption, access controls, network isolation, and monitoring capabilities. However, these features require proper implementation and configuration by customers. A misconfigured S3 bucket, overly permissive access policies, or inadequate network controls can expose data despite the underlying platform's security capabilities.

The breach appears to have resulted from Vinomofo's configuration and management of its AWS environment rather than vulnerabilities in AWS itself. This pattern has become common in cloud data breachesโ€”organizations migrate to cloud platforms attracted by scalability and cost benefits but lack the expertise or diligence to properly secure their cloud deployments.
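As a concrete example of the configuration-management discipline at issue, here is a small sketch using boto3, the AWS SDK for Python. It flags S3 buckets that lack a complete public access block, one common source of exposed test databases; a real audit program would also examine ACLs, bucket policies, and the rest of the environment.

```python
import boto3
from botocore.exceptions import ClientError

def buckets_missing_public_access_block():
    """List S3 buckets whose public-exposure guard is absent or
    incomplete -- a frequent root cause of leaked test databases."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            conf = s3.get_public_access_block(Bucket=name)
            settings = conf["PublicAccessBlockConfiguration"]
            if not all(settings.values()):
                flagged.append(name)  # block exists but is partially off
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)  # no public access block at all
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"review bucket configuration: {name}")
```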

For organizations using cloud infrastructure providers, the determination establishes clear expectations:

Configuration Management: Organizations must implement rigorous configuration management processes ensuring security settings align with best practices and data protection requirements.

Access Controls: Cloud environments require carefully designed access control policies following least-privilege principles. The flexibility of cloud platforms can create excessive access if not properly managed.

Monitoring and Detection: Cloud platforms provide extensive logging and monitoring capabilities, but organizations must actively use these capabilities to detect suspicious activity and security misconfigurations.

Expertise Requirements: Securing cloud environments requires specialized knowledge. Organizations must ensure personnel managing cloud infrastructure possess appropriate expertise or engage qualified consultants.

The Remedial Declarations

The Commissioner made several declarations requiring Vinomofo to cease certain acts and practices, though specific details weren't disclosed in the public announcement. These declarations typically include requirements to:

  • Implement comprehensive information security programs addressing identified deficiencies
  • Conduct regular security assessments and audits of systems handling personal information
  • Provide privacy and security training to relevant personnel
  • Establish privacy governance frameworks with clear accountability and oversight
  • Review and enhance policies and procedures governing data handling, particularly during migration projects

The declarations serve multiple purposes beyond Vinomofo's specific case. They provide a roadmap for other organizations undertaking similar cloud migrations or managing customer data at scale. They establish regulatory expectations about minimum acceptable security practices. And they create precedent that future enforcement actions can reference when addressing similar failures.

โŒ