Received today — 16 December 2025

Veza Extends Reach to Secure and Govern AI Agents

16 December 2025 at 14:06

Veza has added a platform to its portfolio that is specifically designed to secure and govern artificial intelligence (AI) agents that might soon be strewn across the enterprise. Veza, which is currently being acquired by ServiceNow, built the platform on an Access Graph the company previously developed to provide cybersecurity teams with a..

The post Veza Extends Reach to Secure and Govern AI Agents appeared first on Security Boulevard.

Thames Water defers controversial £2.5m in bonuses to bosses

Heavily indebted utility puts back ‘retention payments’ for 21 executives until new year amid search for rescue deal

Thames Water has deferred awarding bosses retention payments totalling £2.5m, avoiding a potentially damaging pre-Christmas row as the heavily indebted utility scrambles to agree a multibillion-pound rescue deal.

Sources at the UK’s biggest water company confirmed the controversial retention payment package for 21 senior executives, which had been due to go out this month, would remain on hold until the new year.

Continue reading...

© Photograph: Toby Melville/Reuters

Post-Quantum Cryptography (PQC): Application Security Migration Guide

16 December 2025 at 06:00

The coming shift to Post-Quantum Cryptography (PQC) is not a distant, abstract threat—it is the single largest, most complex cryptographic migration in the history of cybersecurity. Major breakthroughs are being made with the technology. On October 22nd, Google announced “research that shows, for the first time in history, that a quantum computer can successfully run a verifiable algorithm on hardware, surpassing even the fastest classical supercomputers (13,000x faster).” Quantum computing has the potential to disrupt every industry. Organizations must prepare now or pay later.
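
A practical first step in any PQC migration is a cryptographic inventory: finding where quantum-vulnerable algorithms such as RSA and elliptic-curve cryptography are in use. Below is a minimal sketch of such a scan, assuming PEM-encoded certificates on disk and the Python `cryptography` library; the directory path and the labels are illustrative assumptions, not something the guide itself prescribes.

```python
# Minimal cryptographic-inventory sketch for PQC migration planning.
# Assumes PEM-encoded certificates on disk and the `cryptography`
# library (pip install cryptography). Paths and labels are illustrative.
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, ed25519, rsa

def classify(cert: x509.Certificate) -> str:
    """Label the certificate's public key; RSA, EC, and EdDSA keys are
    all breakable by a large-scale quantum computer (Shor's algorithm)."""
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        return f"RSA-{key.key_size} (quantum-vulnerable)"
    if isinstance(key, ec.EllipticCurvePublicKey):
        return f"EC/{key.curve.name} (quantum-vulnerable)"
    if isinstance(key, ed25519.Ed25519PublicKey):
        return "Ed25519 (quantum-vulnerable)"
    return type(key).__name__  # anything else, e.g. a PQC or hybrid key type

for pem in Path("certs").glob("*.pem"):  # hypothetical certificate store
    cert = x509.load_pem_x509_certificate(pem.read_bytes())
    print(f"{pem.name}: {cert.subject.rfc4514_string()} -> {classify(cert)}")
```

In a real migration this inventory would extend beyond certificates to TLS configurations, code signing, stored-data encryption, and vendor dependencies.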

The post Post-Quantum Cryptography (PQC): Application Security Migration Guide appeared first on Security Boulevard.

8 Ways the DPDP Act Will Change How Indian Companies Handle Data in 2026 

16 December 2025 at 01:16

For years, data privacy in India lived in a grey zone. Mobile numbers demanded at checkout counters. Aadhaar photocopies lying unattended in hotel drawers. Marketing messages that arrived long after you stopped using a service. Most of us accepted this as normal, until the law caught up. That moment has arrived.
The Digital Personal Data Protection Act (DPDP Act), 2023, backed by the Digital Personal Data Protection Rules, 2025, notified by the Ministry of Electronics and Information Technology (MeitY) on 13 November 2025, marks a decisive shift in how personal data must be treated in India. As the country heads into 2026, businesses are entering the most critical phase: execution.
Companies now have an 18-month window to re-engineer systems, processes, and accountability frameworks across IT, legal, HR, marketing, and vendor ecosystems. The change is not cosmetic. It is structural. As Sandeep Shukla, Director, International Institute of Information Technology Hyderabad (IIIT Hyderabad), puts it bluntly:
“Well, I can say that Indian Companies so far has been rather negligent of customer's privacy. Anywhere you go, they ask for your mobile number.” 
The DPDP Act is designed to ensure that such casual indifference to personal data does not survive the next decade. Below are eight fundamental ways the DPDP Act will change how Indian companies handle data in 2026, with real-world implications for businesses, consumers, and the digital economy.

1. Privacy Will Move from the Back Office to the Boardroom

Until now, data protection in Indian organizations largely sat with compliance teams or IT security. That model will not hold in 2026.
The DPDP framework makes senior leadership directly accountable for how personal data is handled, especially in cases of breaches or systemic non-compliance. Privacy risk will increasingly be treated like financial or operational risk.
According to Shashank Bajpai, CISO & CTSO at YOTTA, “The DPDP Act (2023) becomes operational through Rules notified in November 2025; the result is a staggered compliance timetable that places 2026 squarely in the execution phase. That makes 2026 the inflection year when planning becomes measurable operational work and when regulators will expect visible progress.” 
In 2026, privacy decisions will increasingly sit with boards, CXOs, and risk committees. Metrics such as consent opt-out rates, breach response time, and third-party risk exposure will become leadership-level conversations, not IT footnotes.

2. Consent Will Become Clear, Granular, and Reversible

One of the most visible changes users will experience is how consent is sought.
Under the DPDP Act, consent must be specific, informed, unambiguous, and easy to withdraw. Pre-ticked boxes and vague “by using this service” clauses will no longer be enough.
As Gauravdeep Singh, State Head (Digital Transformation), e-Mission Team, MeitY, explains, “Data Principal = YOU.” 
Whether it’s a food delivery app requesting location access or a fintech platform processing transaction history, individuals gain the right to control how their data is used—and to change their mind later.
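
To make the shift concrete, here is a minimal sketch of what purpose-specific, withdrawable consent might look like in code. All names, fields, and the in-memory ledger are hypothetical illustrations; the DPDP Act and Rules define the legal requirements, not this structure.

```python
# Sketch of a purpose-specific, withdrawable consent record (hypothetical).
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    principal_id: str          # the Data Principal ("Data Principal = YOU")
    purpose: str               # one specific purpose, e.g. "delivery_location"
    granted_at: datetime
    withdrawn_at: datetime | None = None

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

ledger: list[ConsentRecord] = []

def grant(principal_id: str, purpose: str) -> None:
    ledger.append(ConsentRecord(principal_id, purpose, datetime.now(timezone.utc)))

def withdraw(principal_id: str, purpose: str) -> None:
    # Withdrawal must be as easy as granting: one call, no conditions.
    for rec in ledger:
        if rec.principal_id == principal_id and rec.purpose == purpose:
            rec.withdrawn_at = datetime.now(timezone.utc)

def may_process(principal_id: str, purpose: str) -> bool:
    # Processing is allowed only against an active, purpose-matched grant;
    # a vague "by using this service" blanket record would fail this check.
    return any(r.active and r.principal_id == principal_id and r.purpose == purpose
               for r in ledger)
```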

3. Data Hoarding Will Turn into a Liability

For many Indian companies, collecting more data than necessary was seen as harmless. Under the DPDP Act, it becomes risky.
Organizations must now define why data is collected, how long it is retained, and how it is securely disposed of. If personal data is no longer required for a stated purpose, it cannot simply be stored indefinitely.
Shukla highlights how deeply embedded poor practices have been: “Hotels take your aadhaar card or driving license and copy and keep it in the drawers inside files without ever telling the customer about their policy regarding the disposal of such PII data safely and securely.”
In 2026, undefined retention is no longer acceptable.
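
In code terms, retention becomes data with an owner and a clock rather than an afterthought. The sketch below illustrates the idea; the categories and periods are invented for illustration and are not values mandated by the DPDP Act or Rules.

```python
# Sketch of retention enforcement: each category of personal data gets a
# defined retention period, and a scheduled job disposes of expired
# records and logs that the disposal happened. All values illustrative.
from datetime import datetime, timedelta, timezone

RETENTION = {                          # hypothetical retention schedule
    "kyc_document": timedelta(days=5 * 365),
    "delivery_address": timedelta(days=90),
    "marketing_profile": timedelta(days=30),
}

def purge_expired(records: list[dict], disposal_log: list[str]) -> list[dict]:
    """Keep only records still inside their retention window.
    Records are dicts with 'category', 'principal_id', and a
    timezone-aware 'collected_at' datetime."""
    now = datetime.now(timezone.utc)
    kept = []
    for rec in records:
        limit = RETENTION.get(rec["category"])
        if limit is not None and now - rec["collected_at"] > limit:
            # Secure disposal (deletion/anonymisation) would happen here;
            # the log records that disposal occurred, never the data itself.
            disposal_log.append(
                f"{now.isoformat()} disposed {rec['category']} "
                f"for principal {rec['principal_id']}")
        else:
            kept.append(rec)
    return kept
```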

4. Third-Party Vendors Will Come Under the Scanner

Data processors such as cloud providers, payment gateways, and CRM platforms will no longer operate in the shadows.
The DPDP Act clearly distinguishes between Data Fiduciaries (companies that decide how data is used) and Data Processors (those that process data on their behalf). Fiduciaries remain accountable even if a breach occurs at a vendor. This will force companies to:
  • Audit vendors regularly 
  • Rewrite contracts with DPDP clauses 
  • Monitor cross-border data flows 
As Shukla notes, “The shops, E-commerce establishments, businesses, utilities collect so much customer PII, and often use third party data processor for billing, marketing and outreach. We hardly ever get to know how they handle the data.”
In 2026, companies will be required to audit vendors, strengthen contracts, and ensure processors follow DPDP-compliant practices, because liability remains with the fiduciary.

5. Breach Response Will Be Timed, Tested, and Visible

Data breaches are no longer just technical incidents; they are legal events.
The DPDP Rules require organizations to detect, assess, and respond to breaches with defined processes and accountability. Silence or delay will only worsen regulatory consequences.
As Bajpai notes, “The practical effect is immediate: companies must move from policy documents to implemented consent systems, security controls, breach workflows, and vendor governance.” 
Tabletop exercises, breach simulations, and forensic readiness will become standard—not optional. 
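
As a rough illustration of what “timed and tested” means in practice, the sketch below tracks a breach against a reporting deadline. The 72-hour window and the step names are assumptions made for illustration; actual deadlines and obligations come from the DPDP Rules, not this code.

```python
# Sketch of a timed breach-response tracker (all values illustrative,
# not deadlines taken from the DPDP Rules).
from datetime import datetime, timedelta, timezone

REPORT_DEADLINE = timedelta(hours=72)   # assumed deadline, for illustration
REQUIRED_STEPS = {"contain", "assess_impact",
                  "notify_principals", "report_regulator"}

def breach_status(detected_at: datetime, steps_done: set[str]) -> dict:
    remaining = REPORT_DEADLINE - (datetime.now(timezone.utc) - detected_at)
    return {
        "hours_left_to_report": max(remaining.total_seconds() / 3600, 0.0),
        "overdue": remaining.total_seconds() < 0,
        "pending_steps": sorted(REQUIRED_STEPS - steps_done),
    }

# Example: a breach detected ten hours ago, with only containment finished.
print(breach_status(datetime.now(timezone.utc) - timedelta(hours=10),
                    {"contain"}))
```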

6. Significant Data Fiduciaries (SDFs) Will Face Heavier Obligations

Not all companies are treated equally under the DPDP Act. Significant Data Fiduciaries (SDFs), those handling large volumes of sensitive personal data, will face stricter obligations, including:
  • Data Protection Impact Assessments 
  • Appointment of India-based Data Protection Officers 
  • Regular independent audits 
Global platforms like Meta, Google, Amazon, and large Indian fintechs will feel the pressure first, but the ripple effect will touch the entire ecosystem.

7. A New Privacy Infrastructure Will Emerge

The DPDP framework is not just regulation—it is ecosystem building. 
As Bajpai observes, “This is not just regulation; it is an economic strategy to build domestic capability in cloud, identity, security and RegTech.” 
Consent Managers, auditors, privacy tech vendors, and compliance platforms will grow rapidly in 2026. For Indian startups, DPDP compliance itself becomes a business opportunity.

8. Trust Will Become a Competitive Advantage

Perhaps the biggest change is psychological. In 2026, users will increasingly ask: 
  • Why does this app need my data? 
  • Can I withdraw consent? 
  • What happens if there’s a breach? 
One Reddit user captured the risk succinctly, “On paper, the DPDP Act looks great… But a law is only as strong as public awareness around it.” 
Companies that communicate transparently and respect user choice will win trust. Those that don’t will lose customers long before regulators step in. 

Preparing for 2026: From Awareness to Action 

As Hareesh Tibrewala, CEO at Anhad, notes, “Organizations now have the opportunity to prepare a roadmap for DPDP implementation.”
For many businesses, however, the challenge lies in turning awareness into action, especially when clarity around timelines and responsibilities is still evolving.
The concern extends beyond citizens to companies themselves, many of which are still grappling with core concepts such as consent management, data fiduciary obligations, and breach response requirements. With penalties tiered by the nature and severity of violations, ranging from significant fines to amounts running into hundreds of crores, this lack of understanding could prove costly.
In 2026, regulators will no longer be looking for intent; they will be looking for evidence of execution. As Bajpai points out, “That makes 2026 the inflection year when planning becomes measurable operational work and when regulators will expect visible progress.”

What Companies Should Do Now: A Practical DPDP Act Readiness Checklist 

As India moves closer to full DPDP enforcement, organizations that act early will find compliance far less disruptive. At a minimum, businesses should focus on the following steps: 
  • Map personal data flows: Identify what personal data is collected, where it resides, who has access to it, and which third parties process it. 
  • Review consent mechanisms: Ensure consent requests are clear, purpose-specific, and easy to withdraw, across websites, apps, and internal systems. 
  • Define retention and deletion policies: Establish how long different categories of personal data are retained and document secure disposal processes. 
  • Assess third-party risk: Audit vendors, cloud providers, and processors to confirm DPDP-aligned controls and contractual obligations. 
  • Strengthen breach response readiness: Put tested incident response and notification workflows in place, not just policies on paper. 
  • Train employees across functions: Build awareness beyond IT and legal teams; privacy failures often begin with everyday operational mistakes. 
  • Assign ownership and accountability: Clearly define who is responsible for DPDP compliance, reporting, and ongoing monitoring. 
These steps are not about ticking boxes; they are about building muscle memory for a privacy-first operating environment. 

2026 Is the Year Privacy Becomes Real 

The DPDP Act does not promise instant perfection. What it demands is accountability.
By 2026, privacy will move from policy documents to product design, from legal fine print to leadership dashboards, and from reactive fixes to proactive governance. Organizations that delay will not only face regulatory penalties but also risk losing customer trust in an increasingly privacy-aware market.
As Sandeep Shukla cautions, “It will probably take years before a proper implementation at all levels of organizations would be seen.” 
But the direction is clear. Personal data in India can no longer be treated casually. The DPDP Act marks the end of informal data handling, and the beginning of a more disciplined, transparent, and accountable digital economy.

DORA Compliance Checklist for Cybersecurity

15 December 2025 at 16:38

The Digital Operational Resilience Act (DORA) is now in full effect, and financial institutions across the EU face mounting pressure to demonstrate robust ICT risk management and cyber resilience. With...

The post DORA Compliance Checklist for Cybersecurity appeared first on Security Boulevard.

Received yesterday — 15 December 2025

FBI Cautions Alaskans Against Phone Scams Using Fake Arrest Threats

15 December 2025 at 06:49

The FBI Anchorage Field Office has issued a public warning after seeing a sharp increase in fraud cases targeting residents across Alaska. According to federal authorities, scammers are posing as law enforcement officers and government officials in an effort to extort money or steal sensitive personal information from unsuspecting victims.

The warning comes as reports continue to rise involving unsolicited phone calls where criminals falsely claim to represent agencies such as the FBI or other local, state, and federal law enforcement bodies operating in Alaska. These scams fall under a broader category of law enforcement impersonation scams, which rely heavily on fear, urgency, and deception.

How the Phone Scam Works

Scammers typically contact victims using spoofed phone numbers that appear legitimate. In many cases, callers accuse individuals of failing to report for jury duty or missing a court appearance. Victims are then told that an arrest warrant has been issued in their name.

To avoid immediate arrest or legal consequences, the caller demands payment of a supposed fine. Victims are pressured to act quickly, often being told they must resolve the issue immediately. According to the FBI, these criminals may also provide fake court documents or reference personal details about the victim to make the scam appear more convincing.

In more advanced cases, scammers may use artificial intelligence tools to enhance their impersonation tactics. This includes generating realistic voices or presenting professionally formatted documents that appear to come from official government sources. These methods have contributed to the growing sophistication of government impersonation scams nationwide.

Common Tactics Used by Scammers

Authorities note that these scams most often occur through phone calls and emails. Criminals commonly use aggressive language and insist on speaking only with the targeted individual. Victims are often told not to discuss the call with family members, friends, banks, or law enforcement agencies.

Payment requests are another key red flag. Scammers typically demand money through methods that are difficult to trace or reverse. These include cash deposits at cryptocurrency ATMs, prepaid gift cards, wire transfers, or direct cryptocurrency payments. The FBI has emphasized that legitimate government agencies never request payment through these channels.

FBI Clarifies What Law Enforcement Will Not Do

The FBI has reiterated that it does not call members of the public to demand payment or threaten arrest over the phone. Any call claiming otherwise should be treated as fraudulent. This clarification is central to the scam warning that the FBI is urging Alaska residents to take seriously.

Impact of Government Impersonation Scams

Data from the FBI’s Internet Crime Complaint Center (IC3) highlights the scale of the problem. In 2024 alone, IC3 received more than 17,000 complaints related to government impersonation scams across the United States. Reported losses from these incidents exceeded $405 million nationwide.

Alaska has not been immune. Reported victim losses in the state surpassed $1.3 million, underscoring the financial and emotional impact these scams can have on individuals and families.

How Alaskans Can Protect Themselves

To reduce the risk of falling victim, the FBI urges residents to “take a beat” before responding to any unsolicited communication. Individuals should resist pressure tactics and take time to verify claims independently.

The FBI strongly advises against sharing or confirming personally identifiable information with anyone contacted unexpectedly. Alaskans are also cautioned never to send money, gift cards, cryptocurrency, or other assets in response to unsolicited demands.

What to Do If You Are Targeted

Anyone who believes they may have been targeted or victimized should immediately stop communicating with the scammer. Victims should notify their financial institutions, secure their accounts, contact local law enforcement, and file a complaint with the FBI’s Internet Crime Complaint Center at www.ic3.gov. Prompt reporting can help limit losses and prevent others from being targeted.

Received before yesterday

LGPD (Brazil)

14 December 2025 at 04:30

What is the LGPD (Brazil)? The Lei Geral de Proteção de Dados Pessoais (LGPD), or General Data Protection Law (Law No. 13.709/2018), is Brazil’s comprehensive data protection framework, inspired by the European Union’s GDPR. It regulates the collection, use, storage, and sharing of personal data, applying to both public and private entities, regardless of industry, […]

The post LGPD (Brazil) appeared first on Centraleyes.

The post LGPD (Brazil) appeared first on Security Boulevard.

New laws to be considered after ‘harrowing stories’ from ex-Vodafone franchisees

12 December 2025 at 02:00

Concerns about power imbalance in franchise agreements amid claims over firm’s treatment of small-business owners

The government will consider new laws to correct the power imbalance in franchise agreements in response to the “harrowing stories” of small business people running Vodafone stores.

The move follows allegations of suicide and attempted suicide among shopkeepers who had agreed to deals to run retail outlets for the £18bn telecoms company, which were revealed by the Guardian on Monday.

Continue reading...

© Photograph: Andy Rain/EPA

City of Cambridge Advises Password Reset After Nationwide CodeRED Data Breach

12 December 2025 at 00:56

The City of Cambridge has released an important update regarding the OnSolve CodeRED emergency notifications system, also known locally as Cambridge’s reverse 911 system. The platform, widely used by thousands of local governments and public safety agencies across the country, was taken offline in November following a nationwide OnSolve CodeRED cyberattack. Residents who rely on CodeRED alerts for information about snow emergencies, evacuations, water outages, or other service disruptions are being asked to take immediate steps to secure their accounts and continue receiving notifications.

Impact of the OnSolve CodeRED Cyberattack on User Data

According to city officials, the data breach affected CodeRED databases nationwide, including Cambridge. The compromised information may include phone numbers, email addresses, and passwords of registered users. Importantly, the attack targeted the OnSolve CodeRED system itself, not the City of Cambridge or its departments. The incident mirrors concerns raised in Monroe County, Georgia, where officials confirmed that residents’ personal information was also exposed. The Monroe County Emergency Management Agency emphasized that the breach was part of a nationwide cybersecurity incident and not a local failure.

Transition to CodeRED by Crisis24

In response, OnSolve permanently decommissioned the old CodeRED platform and migrated services to a new, secure environment known as CodeRED by Crisis24. The new system has undergone comprehensive security audits, including penetration testing and system hardening, to ensure stronger protection against future threats. For Cambridge residents, previously registered contact information has been imported into the new platform. However, due to security concerns, all passwords have been removed. Users must now reset their credentials before accessing their accounts.

Steps for City of Cambridge Residents and Users

To continue receiving emergency notifications, residents should:
  • Visit accountportal.onsolve.net/cambridgema
  • Enter their username (usually an email address)
  • Select “forgot password” to verify and reset credentials
  • If unsure of their username, use the “forgot username” option
Officials strongly advise against reusing old CodeRED passwords, as they may have been compromised. Instead, users should create strong, unique passwords and update their information once logged in. Additionally, anyone who used the same password across multiple accounts is urged to change those credentials immediately to reduce the risk of further exposure.

Broader National Context

The Monroe County cyberattack highlights the scale of the issue. Officials there reported that data such as names, addresses, phone numbers, and passwords were compromised. Residents who enrolled before March 31, 2025, had their information migrated to the new Crisis24 CodeRED platform, while those who signed up afterward must re‑enroll. OnSolve has reassured communities that the intrusion was contained within the original system and did not spread to other networks. While there is currently no evidence of identity theft, the incident underscores the growing risks of cyber intrusions nationwide.

Resources for Cybersecurity Protection

Residents who believe they may have been victims of cyber‑enabled fraud are encouraged to report incidents to the FBI Internet Crime Complaint Center (IC3) at ic3.gov. Additional resources are available to help protect individuals and families from fraud and cybercrime. Security experts note that the rising frequency of attacks highlights the importance of independent threat‑intelligence providers. Companies such as Cyble track vulnerabilities and cybercriminal activity across global networks, offering organizations tools to strengthen defenses and respond more quickly to incidents.

Looking Ahead

The City of Cambridge has thanked residents for their patience as staff worked with OnSolve to restore emergency alert capabilities. Officials emphasized that any breach of security is a serious concern and confirmed that they will continue monitoring the new CodeRED by Crisis24 platform to ensure its standards are upheld. In addition, the City is evaluating other emergency alerting systems to determine the most effective long‑term solution for community safety.

AIs Exploiting Smart Contracts

11 December 2025 at 12:06

I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature.

Here’s some interesting research on training AIs to automatically exploit smart contracts:

AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents’ ability to exploit smart contracts on Smart CONtracts Exploitation benchmark (SCONE-bench), a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense...

The post AIs Exploiting Smart Contracts appeared first on Security Boulevard.

Federal Grand Jury Charges Former Manager with Government Contractor Fraud

11 December 2025 at 04:16

Government contractor fraud is at the heart of a new indictment returned by a federal grand jury in Washington, D.C. against a former senior manager in Virginia. Prosecutors say Danielle Hillmer, 53, of Chantilly, misled federal agencies for more than a year about the security of a cloud platform used by the U.S. Army and other government customers. The indictment, announced yesterday, charges Hillmer with major government contractor fraud, wire fraud, and obstruction of federal audits. According to prosecutors, she concealed serious weaknesses in the system while presenting it as fully compliant with strict federal cybersecurity standards.

Government Contractor Fraud: Alleged Scheme to Mislead Agencies

According to court documents, Hillmer’s actions spanned from March 2020 through November 2021. During this period, she allegedly obstructed auditors and misrepresented the platform’s compliance with the Federal Risk and Authorization Management Program (FedRAMP) and the Department of Defense’s Risk Management Framework. The indictment claims that while the platform was marketed as a secure environment for federal agencies, it lacked critical safeguards such as access controls, logging, and monitoring. Despite repeated warnings, Hillmer allegedly insisted the system met the FedRAMP High baseline and DoD Impact Levels 4 and 5, both of which are required for handling sensitive government data.

Obstruction of Audits

Federal prosecutors allege Hillmer went further by attempting to obstruct third-party assessors during audits in 2020 and 2021. She is accused of concealing deficiencies and instructing others to hide the true state of the system during testing and demonstrations. The indictment also states that Hillmer misled the U.S. Army to secure sponsorship for a Department of Defense provisional authorization. She allegedly submitted, and directed others to submit, authorization materials containing false information to assessors, authorizing officials, and government customers. These misrepresentations, prosecutors say, allowed the contractor to obtain and maintain government contracts under false pretenses.

Charges and Potential Penalties

Hillmer faces two counts of wire fraud, one count of major government fraud, and two counts of obstruction of a federal audit. If convicted, she could face:
  • Up to 20 years in prison for each wire fraud count
  • Up to 10 years in prison for major government fraud
  • Up to 5 years in prison for each obstruction count
A federal district court judge will determine any sentence after considering the U.S. Sentencing Guidelines and other statutory factors. The indictment was announced by Acting Assistant Attorney General Matthew R. Galeotti of the Justice Department’s Criminal Division and Deputy Inspector General Robert C. Erickson of the U.S. General Services Administration Office of Inspector General (GSA-OIG). The case is being investigated by the GSA-OIG, the Defense Criminal Investigative Service, the Naval Criminal Investigative Service, and the Department of the Army Criminal Investigation Division. Trial Attorneys Lauren Archer and Paul Hayden of the Criminal Division’s Fraud Section are prosecuting the case.

Broader Implications of Government Contractor Fraud

The indictment highlights ongoing concerns about the integrity of cloud platforms used by federal agencies. Programs like FedRAMP and the DoD’s Risk Management Framework are designed to ensure that systems handling sensitive government data meet rigorous security standards. Allegations that a contractor misrepresented compliance raise questions about oversight and the risks posed to national security when platforms fall short of requirements. Federal officials emphasized that the case underscores the importance of transparency and accountability in government contracting, particularly in areas involving cybersecurity. Note: An indictment is merely an allegation. Hillmer, like all defendants, is presumed innocent until proven guilty beyond a reasonable doubt in a court of law.

Ring-fencing AI Workloads for NIST and ISO Compliance 

10 December 2025 at 12:32

AI is transforming enterprise productivity and reshaping the threat model at the same time. Unlike human users, agentic AI and autonomous agents operate at machine speed and inherit broad network permissions and embedded credentials. This creates new security and compliance … Read More

The post Ring-fencing AI Workloads for NIST and ISO Compliance  appeared first on 12Port.

The post Ring-fencing AI Workloads for NIST and ISO Compliance  appeared first on Security Boulevard.

Australia’s Social Media Ban for Kids: Protection, Overreach or the Start of a Global Shift?

10 December 2025 at 04:23

On a cozy December morning, as children in Australia set their bags aside for the holiday season and picked up their tablets and phones to take that selfie and announce to the world they were all set for the fun to begin, something felt amiss. They couldn't access their Snapchat and Instagram accounts. No, it wasn't another outage caused by a cyberattack, because they could see their parents lounging on the couch and laughing at the dog dance reels. So why were they locked out?

The answer: the ban on social media for children under 16 had officially taken effect. It wasn't just one or 10 or 100 but more than one million young users who woke up locked out of their social media. No TikTok scroll. No Snapchat streak. No YouTube comments. Australia had quietly entered a new era, the world’s first nationwide ban on social media for children under 16, effective December 10. The move has initiated global debate, parental relief, youth frustration, and a broader question: Is this the start of a global shift, or a risky social experiment?

Prime Minister Anthony Albanese was clear about why his government took this unparalleled step. “Social media is doing harm to our kids, and I’m calling time on it,” he said during a press conference. “I’ve spoken to thousands of parents… they’re worried sick about the safety of our kids online, and I want Australian families to know that the Government has your back.”

Under the Anthony Albanese social media policy, platforms including Instagram, Facebook, X, Snapchat, TikTok, Reddit, Twitch, Kick, Threads and YouTube must block users under 16, or face fines of up to AU$32 million. Parents and children won’t be penalized, but tech companies will.

(Image: Australia's social media ban. Source: eSafety Commissioner)

Australia's Ban on Social Media: A Big Question

Albanese pointed to rising concerns about the effects of social media on children, from body-image distortion to exposure to inappropriate content and addictive algorithms that tug at young attention spans. Research supports these concerns. A Pew Research Center study found:
  • 48% of teens say social media has a mostly negative effect on people their age, up sharply from 32% in 2022.
  • 45% feel they spend too much time on social media.
  • Teen girls experience more negative impacts than boys, including mental health struggles (25% vs 14%) and loss of confidence (20% vs 10%).
  • Yet paradoxically, 74% of teens feel more connected to friends because of social media, and 63% use it for creativity.
These contradictions make the issue far from black and white. Psychologists remind us that adolescence, beginning around age 10 and stretching into the mid-20s, is a time of rapid biological and social change, and that maturity levels vary. This means that a one-size-fits-all ban on social media may overshoot the mark.

Ban on Social Media for Users Under 16: How People Reacted

Australia’s announcement, first revealed in November 2024, has motivated countries from Malaysia to Denmark to consider similar legislation. But not everyone is convinced this is the right way forward.

Supporters Applaud “A Chance at a Real Childhood”

Pediatric occupational therapist Cris Rowan, who has spent 22 years working with children, celebrated the move: “This may be the first time children have the opportunity to experience a real summer,” she said. “Canada should follow Australia’s bold initiative. Parents and teachers can start their own movement by banning social media from homes and schools.” Parents’ groups have also welcomed the decision, seeing it as a necessary intervention in a world where screens dominate childhood.

Others Say the Ban Is Imperfect, but Necessary

Australian author Geoff Hutchison puts it bluntly: “We shouldn’t look for absolutes. It will be far from perfect. But we can learn what works… We cannot expect the repugnant tech bros to care.” His view reflects a broader belief that tech companies have too much power, and too little accountability.

Experts Warn Against False Security 

However, some experts caution that Australia's ban on social media may create the illusion of safety while failing to address deeper issues. Professor Tama Leaver, an Internet Studies expert at Curtin University, told The Cyber Express that while the ban addresses some risks, such as algorithmic amplification of inappropriate content and endless scrolling, many online dangers remain.

“The social media ban only really addresses one set of risks for young people, which is algorithmic amplification of inappropriate content and the doomscrolling or infinite scroll. Many risks remain. The ban does nothing to address cyberbullying since messaging platforms are exempt from the ban, so cyberbullying will simply shift from one platform to another.”

Leaver also noted that restricting access to popular platforms will not drive children offline. With the ban in place, young users will explore whatever digital spaces remain, which could be less regulated and potentially riskier.

“Young people are not leaving the digital world. If we take some apps and platforms away, they will explore and experiment with whatever is left. If those remaining spaces are less known and more risky, then the risks for young people could definitely increase. Ideally the ban will lead to more conversations with parents and others about what young people explore and do online, which could mitigate many of the risks.”

From a broader perspective, Leaver emphasized that the ban on social media will only be fully beneficial if accompanied by significant investment in digital literacy and digital citizenship programs across schools:

“The only way this ban could be fully beneficial is if there is a huge increase in funding and delivery of digital literacy and digital citizenship programs across the whole K-12 educational spectrum. We have to formally teach young people those literacies they might otherwise have learnt socially, otherwise the ban is just a 3 year wait that achieves nothing.”

He added that platforms themselves should take a proactive role in protecting children:

“There is a global appetite for better regulation of platforms, especially regarding children and young people. A digital duty of care which requires platforms to examine and proactively reduce or mitigate risks before they appear on platforms would be ideal, and is something Australia and other countries are exploring. Minimizing risks before they occur would be vastly preferable to the current processes which can only usually address harm once it occurs.”

Looking at the global stage, Leaver sees Australia's ban on social media as a potential learning opportunity for other nations:

“There is clearly global appetite for better and more meaningful regulation of digital platforms. For countries considering their own bans, taking the time to really examine the rollout in Australia, to learn from our mistakes as much as our ambitions, would seem the most sensible path forward.”

Other specialists continue to warn that the ban on social media could isolate vulnerable teenagers or push them toward more dangerous, unregulated corners of the internet.

Legal Voices Raise Serious Constitutional Questions

Senior Supreme Court Advocate Dr. K. P. Kylasanatha Pillay offered a thoughtful reflection: “Exposure of children to the vagaries of social media is a global concern… But is a total ban feasible? We must ask whether this is a reasonable restriction or if it crosses the limits of state action. Not all social media content is harmful. The best remedy is to teach children awareness.” His perspective reflects growing debate about rights, safety, and state control.

LinkedIn, Reddit, and the Public Divide

Social media itself has become the battleground for reactions. On Reddit, youngsters were particularly vocal about the ban on social media. One teen wrote: “Good intentions, bad execution. This will make our generation clueless about internet safety… Social media is how teenagers express themselves. This ban silences our voices.” Another pointed out the easy loophole: “Bypassing this ban is as easy as using a free VPN. Governments don’t care about safety — they want control.” But one adult user disagreed: “Everyone against the ban seems to be an actual child. I got my first smartphone at 20. My parents were right — early exposure isn’t always good.” This generational divide is at the heart of the debate.

Brands, Marketers, and Schools Brace for Impact

Bindu Sharma, Founder of World One Consulting, highlighted the global implications: “Ten of the biggest platforms were ordered to block children… The world is watching how this plays out.” If the ban succeeds, brands may rethink how they target younger audiences. If it fails, digital regulation worldwide may need reimagining.

Where Does This Leave the World?

Australia’s decision to ban social media for children under 16 is bold, controversial, and rooted in good intentions. It could reshape how societies view childhood, technology, and digital rights. But as critics note, a ban on social media platforms can also create unintended consequences, from delinquency to digital illiteracy. What’s clear is this: Australia has started a global conversation that’s no longer avoidable. As one LinkedIn user concluded: “Safety of the child today is assurance of the safety of society tomorrow.”

Cultural Lag Leaves Security as the Weakest Link

5 December 2025 at 11:19

For too long, security has been cast as a bottleneck – swooping in after developers build and engineers test to slow things down. The reality is blunt: if it’s bolted on, you’ve already lost. The organizations that win make security part of every decision, from the first line of code to the last boardroom conversation...

The post Cultural Lag Leaves Security as the Weakest Link appeared first on Security Boulevard.

European Court Imposes Strict New Data Checks on Online Marketplace Ads

3 December 2025 at 00:34

A ruling handed down on Tuesday by the Court of Justice of the European Union (CJEU) has made it clear that online marketplaces are responsible for the personal data that appears in advertisements on their platforms. Platforms must get consent from any person whose data is shown in an advertisement, and must verify ads before they go live, especially where sensitive data is involved.

The ruling stems from a 2018 incident in Romania. A fake advertisement on the classifieds website publi24.ro claimed a woman was offering sexual services. The post included her photos and phone number, which were used without her permission. The operator of the site, Russmedia Digital, removed the ad within an hour, but by then it had already been copied to other websites. The woman said the ad harmed her privacy and reputation and took the company to court. Lower courts in Romania gave different decisions, so the case was referred to the CJEU for clarity. The CJEU has now confirmed that online marketplaces are data controllers under the GDPR for the personal data contained in ads on their sites.

CJEU Ruling: What Online Marketplaces Must Do Now

The court said that marketplace operators must take more responsibility and cannot rely on old rules that protect hosting services from liability. From now on, platforms must:
  • Check ads before publishing them when they contain personal or sensitive data.
  • Confirm that the person posting the ad is the same person shown in the ad, or make sure the person shown has given explicit consent.
  • Refuse ads if consent or identity verification cannot be confirmed.
  • Put measures in place to help prevent sensitive ads from being copied and reposted on other websites.
These steps must be part of the platform’s regular technical and organisational processes to comply with the GDPR.
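
In engineering terms, the ruling implies a pre-publication gate along the lines of the sketch below. The field names and checks are hypothetical; a real platform would back them with documented identity verification and human review rather than simple flags.

```python
# Sketch of the pre-publication gate the ruling implies (hypothetical fields).
def may_publish(ad: dict) -> bool:
    if not ad.get("contains_personal_data"):
        return True                       # no personal data: no extra gate
    for person in ad.get("depicted_persons", []):
        is_poster = person["id"] == ad["poster_id"]
        if not (is_poster or person.get("explicit_consent_verified")):
            return False                  # refuse: identity/consent unconfirmed
    if ad.get("contains_sensitive_data") and not ad.get("reviewed_before_publish"):
        return False                      # sensitive data: must be checked first
    return True
```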

What This Means for Platforms Across The EU

Legal teams at Pinsent Masons warned the decision “will likely have major implications for data protection across the 27 member states.” Nienke Kingma of Pinsent Masons said the ruling is important for compliance, adding it is “setting a new standard for data protection compliance across the EU.” Thijs Kelder, also at Pinsent Masons, said: “This judgment makes clear that online marketplaces cannot avoid their obligations under the GDPR,” and noted the decision “increases the operational risks on these platforms,” meaning companies will need stronger risk management. Daphne Keller of Stanford Law School warned about wider effects on free expression and platform design, noting the ruling “has major implications for free expression and access to information, age verification and privacy.”

Practical Impact

The ruling marks a major shift in how online marketplaces must operate. Platforms that allow users to post adverts will now have to rethink their processes, from verifying identities and checking personal data before an ad goes live to updating their terms and investing in new technical controls. Smaller platforms may feel the pressure most, as the cost of building these checks could be significant. What happens next will depend on how national data protection authorities interpret the ruling and how quickly companies can adapt. The coming months will reveal how verification should work in practice, what measures count as sufficient protection against reposting, and how platforms can balance these new duties with user privacy and free expression. The ruling sets a strict new standard, and its real impact will become clearer as regulators, courts, and platforms begin to implement it.

Closing the Document Security Gap: Why Document Workflows Must Be Part of Cybersecurity

2 December 2025 at 12:35

Organizations are spending more than ever on cybersecurity, layering defenses around networks, endpoints, and applications. Yet a company’s documents, one of the most fundamental business assets, remains an overlooked weak spot. Documents flow across every department, cross company boundaries, and often contain the very data that compliance officers and security teams work hardest to protect...

The post Closing the Document Security Gap: Why Document Workflows Must Be Part of Cybersecurity appeared first on Security Boulevard.

Australia Establishes AI Safety Institute to Combat Emerging Threats from Frontier AI Systems

2 December 2025 at 11:38

Australia's fragmented approach to AI oversight—with responsibilities scattered across privacy commissioners, consumer watchdogs, online safety regulators, and sector-specific agencies—required coordination to keep pace with rapidly evolving AI capabilities and their potential to amplify existing harms while creating entirely new threats.

The Australian Government announced the establishment of the AI Safety Institute, backed by $29.9 million in funding, to monitor emerging AI capabilities, test advanced systems, and share intelligence across government while supporting regulators to ensure AI companies comply with Australian law. The Institute is part of the larger National AI Plan, which the Australian government officially released on Tuesday.

The Institute will become operational in early 2026 as the centerpiece of the government's strategy to keep Australians safe while capturing economic opportunities from AI adoption. The approach maintains existing legal frameworks as the foundation for addressing AI-related risks rather than introducing standalone AI legislation, with the Institute supporting portfolio agencies and regulators to adapt laws when necessary.

Dual Focus on Upstream Risks and Downstream Harms

The AI Safety Institute will focus on both upstream AI risks and downstream AI harms. Upstream risks involve model capabilities and the ways AI systems are built and trained that can create or amplify harm, requiring technical evaluation of frontier AI systems before deployment.

Downstream harms represent real-world effects people experience when AI systems are used, including bias in hiring algorithms, privacy breaches from data processing, discriminatory outcomes in automated decision-making, and emerging threats like AI-enabled crime and AI-facilitated abuse disproportionately impacting women and girls.

The Institute will generate and share technical insights on emerging AI capabilities, working across government and with international partners. It will develop advice, support bilateral and multilateral safety engagement, and publish safety research to inform industry and academia while engaging with unions, business, and researchers to ensure functions meet community needs.

Supporting Coordinated Regulatory Response

The Institute will support coordinated responses to downstream AI harms by engaging with portfolio agencies and regulators, monitoring and analyzing information across government to allow ministers and regulators to take informed, timely, and cohesive regulatory action.

Portfolio agencies and regulators remain best placed to assess AI uses and harms in specific sectors and adjust regulatory approaches when necessary. The Institute will support existing regulators to ensure AI companies are compliant with Australian law and uphold legal standards of fairness and transparency.

The government emphasized that Australia has strong existing, largely technology-neutral legal frameworks including sector-specific guidance and standards that can apply to AI. The approach promotes flexibility, uses regulators' existing expertise, and targets emerging threats as understanding of AI's strengths and limitations evolves.

Addressing Specific AI Harms

The government is taking targeted action against specific harms while continuing to assess suitability of existing laws. Consumer protections under Australian Consumer Law apply equally to AI-enabled goods and services, with Treasury's review finding Australians enjoy the same strong protections for AI products as traditional goods.

The government addresses AI-related risks through enforceable industry codes under the Online Safety Act 2021, criminalizing non-consensual deepfake material while considering further restrictions on "nudify" apps and reforms to tackle algorithmic bias.

The Attorney-General's Department engages stakeholders through the Copyright and AI Reference Group to consult on possible updates to copyright laws as they relate to AI, with the government ruling out a text and data mining exception to provide certainty to Australian creators and media workers.

Healthcare AI regulation is under review through the Safe and Responsible AI in Healthcare Legislation and Regulation Review, while the Therapeutic Goods Administration oversees AI used in medical device software following its review on strengthening regulation of medical device software including artificial intelligence.

Also read: CPA Australia Warns: AI Adoption Accelerates Cyber Risks for Australian Businesses

National Security and Crisis Response

The Department of Home Affairs, National Intelligence Community, and law enforcement agencies continue efforts to proactively mitigate serious risks posed by AI. Home Affairs coordinates cross-government efforts on cybersecurity and critical infrastructure protection while overseeing the Protective Security Policy Framework detailing policy requirements for authorizing AI technology systems for non-corporate Commonwealth entities.

AI is likely to exacerbate existing national security risks and create new, unknown threats. The government is preparing for potential AI-related incidents through the Australian Government Crisis Management Framework, which provides overarching policy for managing potential crises.

The government will consider how AI-related harms are managed under the framework to ensure ongoing clarity regarding roles and responsibilities across government to support coordinated and effective action.

International Engagement

The Institute will collaborate with domestic and international partners including the National AI Centre and the International Network of AI Safety Institutes to support global conversations on understanding and addressing AI risks.

Australia is a signatory to the Bletchley Declaration, Seoul Declaration, and Paris Statement emphasizing inclusive international cooperation on AI governance. Participation in the UN Global Digital Compact, Hiroshima AI Process, and Global Partnership on AI supports conversations on advancing safe, secure, and trustworthy adoption.

The government is developing an Australian Government Strategy for International Engagement and Regional Leadership on Artificial Intelligence to align foreign and domestic policy settings while establishing priorities for bilateral partnerships and engagement in international forums.

Also read: UK’s AI Safety Institute Establishes San Francisco Office for Global Expansion

GPS Spoofing Detected Across Major Indian Airports; Government Tightens Security

2 December 2025 at 00:37

The Union government of India, the country’s central federal administration, on Monday confirmed several instances of GPS spoofing near Delhi’s Indira Gandhi International Airport (IGIA) and other major airports. Officials said that despite the interference, all flights continued to operate safely and without disruption. The clarification came after reports pointed to digital interference affecting aircraft navigation systems during approach procedures at some of the busiest airports in the country.

What Is GPS Spoofing?

GPS spoofing is a form of signal interference where false Global Positioning System (GPS) signals are broadcast to mislead navigation systems. For aircraft, it can temporarily confuse onboard systems about their true location or altitude. While pilots and air traffic controllers are trained to manage such situations, repeated interference requires immediate reporting and stronger safeguards.
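
One generic defence-in-depth technique against spoofing, offered here purely as an illustration rather than anything the DGCA SOP prescribes, is a plausibility check: reject position fixes that imply physically impossible motion. A minimal sketch:

```python
# Plausibility check for GPS fixes: flag a new fix whose implied ground
# speed exceeds what the aircraft can physically achieve. The threshold
# is an illustrative bound, not a value from any aviation SOP.
import math

MAX_SPEED_KMH = 1200.0          # assumed ceiling for a commercial aircraft

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0                  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def fix_is_plausible(prev, new) -> bool:
    """prev/new are (lat, lon, unix_seconds) tuples. A physically
    impossible jump is one symptom of spoofed signals."""
    dt_hours = (new[2] - prev[2]) / 3600.0
    if dt_hours <= 0:
        return False            # non-increasing time is itself suspect
    speed = haversine_km(prev[0], prev[1], new[0], new[1]) / dt_hours
    return speed <= MAX_SPEED_KMH

# Example: a several-hundred-kilometre jump in 60 seconds is rejected.
print(fix_is_plausible((28.55, 77.10, 0.0), (24.00, 73.00, 60.0)))
```

Real avionics cross-check GNSS against inertial reference units and ground-based aids such as ILS and radar, which is exactly the backup role the ministry describes below.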

Government Confirms Incidents at Multiple Airports

India’s Civil Aviation Minister Ram Mohan Naidu informed Parliament that several flights approaching Delhi reported GPS spoofing while using satellite-based landing procedures on Runway 10. In a written reply to the Rajya Sabha, the minister confirmed that similar signal interference reports have been received from several of India’s major airports, including Mumbai, Kolkata, Hyderabad, Bengaluru, Amritsar, and Chennai. He explained that when GPS spoofing was detected in Delhi, contingency procedures were activated for flights approaching the affected runway. The rest of the airport continued functioning normally through conventional ground-based navigation systems, preventing any impact on overall flight operations.

Safety Procedures and New Reporting System

The Directorate General of Civil Aviation (DGCA) has issued a Standard Operating Procedure (SOP) for real-time reporting of GPS spoofing and Global Navigation Satellite System (GNSS) interference around IGI Airport. The minister added that since DGCA made reporting mandatory in November 2023, regular interference alerts have been received from major airports across the country. These reports are helping regulators identify patterns and respond more quickly to any navigation-related disturbances. India continues to maintain a network of traditional navigation and surveillance systems such as Instrument Landing Systems (ILS) and radar. These systems act as dependable backups if satellite-based navigation is interrupted, following global aviation best practices.

Airports on High Cyber Vigilance

The government said India is actively engaging with global aviation bodies to stay updated on the latest technologies, methods, and safety measures related to aviation cybersecurity. Meanwhile, the Airports Authority of India (AAI) is deploying advanced cybersecurity tools across its IT infrastructure to strengthen protection against potential digital threats. Although the cyber-related interference did not affect flight schedules, the confirmation of GPS spoofing attempts at major airports has led to increased monitoring across key aviation hubs. These airports handle millions of passengers every year, making continuous vigilance essential.

Recent Aviation Challenges

The GPS spoofing reports come shortly after a separate system failure at Delhi Airport in November, which caused major delays. That incident was later linked to a technical issue with the Automatic Message Switching System (AMSS) and was not related to cyber activity. The aviation sector also faced another challenge recently when Airbus A320 aircraft required an urgent software update; the update, affecting a type widely used in India, led to around 388 delayed flights on Saturday. All Indian airlines completed the required updates by Sunday, allowing normal operations to resume. Despite reports of interference, the Union government emphasised that there was no impact on passenger safety or flight operations. Established procedures, trained crews, and reliable backup systems ensured that aircraft continued operating normally. Authorities said they will continue monitoring navigation systems closely and strengthening cybersecurity measures across airports to safeguard India’s aviation network.

Cybersecurity Coalition to Government: Shutdown is Over, Get to Work

28 November 2025 at 13:37

The Cybersecurity Coalition, an industry group of almost a dozen vendors, is urging the Trump Administration and Congress, now that the government shutdown is over, to take a number of steps to strengthen the country's cybersecurity posture as China, Russia, and other foreign adversaries accelerate their attacks.

The post Cybersecurity Coalition to Government: Shutdown is Over, Get to Work appeared first on Security Boulevard.

EU Reaches Agreement on Child Sexual Abuse Detection Law After Three Years of Contentious Debate

27 November 2025 at 13:47

A lengthy standoff over privacy rights versus child protection ended Wednesday when EU member states finally agreed on a negotiating mandate for the Child Sexual Abuse Regulation, a controversial law requiring online platforms to detect, report, and remove child sexual abuse material. Critics warn the measures could enable mass surveillance of private communications.

The Council agreement, reached despite opposition from the Czech Republic, Netherlands, and Poland, clears the way for trilogue negotiations with the European Parliament to begin in 2026 on legislation that would permanently extend voluntary scanning provisions and establish a new EU Centre on Child Sexual Abuse.

The Council's position introduces three risk categories of online services, based on objective criteria including service type, with authorities able to oblige providers classified as high-risk to contribute to developing technologies that mitigate risks relating to their services. The framework shifts responsibility to digital companies to proactively address risks on their platforms.

Permanent Extension of Voluntary Scanning

One significant provision permanently extends voluntary scanning, a temporary measure first introduced in 2021 that allows companies to voluntarily scan for child sexual abuse material without violating EU privacy laws. That exemption was set to expire in April 2026 under current e-Privacy Directive provisions.

At present, providers of messaging services may voluntarily check content shared on their platforms for online child sexual abuse material, then report and remove it. According to the Council position, this exemption will continue to apply indefinitely under the new law.

Danish Justice Minister Peter Hummelgaard welcomed the Council's agreement, stating that the spread of child sexual abuse material is "completely unacceptable." "Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse," Hummelgaard said.

New EU Centre on Child Sexual Abuse

The legislation provides for establishment of a new EU agency, the EU Centre on Child Sexual Abuse, to support implementation of the regulation. The Centre will act as a hub for child sexual abuse material detection, reporting, and database management, receiving reports from providers, assessing risk levels across platforms, and maintaining a database of indicators.

The EU Centre will assess and process information supplied by online providers about child sexual abuse material identified on services, creating, maintaining and operating a database for reports submitted by providers. The Centre will share information from companies with Europol and national law enforcement bodies, supporting national authorities in assessing the risk that online services could be used to spread abuse material.

Online companies must provide assistance for victims who would like child sexual abuse material depicting them removed or for access to such material disabled. Victims can ask for support from the EU Centre, which will check whether companies involved have removed or disabled access to items victims want taken down.

Privacy Concerns and Opposition

The breakthrough comes after months of stalled negotiations and a postponed October vote, when Germany joined a blocking minority opposing what critics commonly call "chat control." Berlin argued the proposal risked "unwarranted monitoring of chats," comparing it to opening other people's letters.

Critics from Big Tech companies and data privacy NGOs warn the measures could pave the way for mass surveillance, as private messages would be scanned by authorities to detect illegal images. The Computer and Communications Industry Association stated that EU member states made clear the regulation can only move forward if new rules strike a true balance protecting minors while maintaining confidentiality of communications, including end-to-end encryption.

Also read: EU Chat Control Proposal to Prevent Child Sexual Abuse Slammed by Critics

Former Pirate MEP Patrick Breyer, who has been advocating against the file, characterized the Council endorsement as "a Trojan Horse" that legitimizes warrantless, error-prone mass surveillance of millions of Europeans by US corporations through cementing voluntary mass scanning.

The European Parliament's study heavily critiqued the Commission's proposal, concluding there aren't currently technological solutions that can detect child sexual abuse material without resulting in high error rates affecting all messages, files and data in platforms. The study also concluded the proposal would undermine end-to-end encryption and security of digital communications.

Scope of the Crisis

Statistics underscore the urgency. Some 20.5 million reports covering 63 million files of abuse were submitted to the National Center for Missing and Exploited Children's CyberTipline last year, with online grooming increasing 300 percent since negotiations began. Every half second, an image of a child being sexually abused is reported online.
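The "every half second" figure is consistent with the annual totals. As a rough sanity check, assuming reports are spread evenly across the year:

\[
\frac{63{,}000{,}000\ \text{files}}{365 \times 24 \times 3600\ \text{s}} \approx 2\ \text{files per second},
\]

or one file roughly every 0.5 seconds.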

Sixty-two percent of abuse content flagged by the Internet Watch Foundation in 2024 was traced to EU servers, with at least one in five children in Europe a victim of sexual abuse.

The Council position allows trilogue negotiations with the European Parliament and Commission to start in 2026. Those negotiations need to conclude before the already postponed expiration of the temporary e-Privacy derogation under which companies can conduct voluntary scanning. The European Parliament reached its negotiating position in November 2023.

Account Takeover Scams Surge as FBI Reports Over $262 Million in Losses

26 November 2025 at 00:34


The Account Takeover fraud threat is accelerating across the United States, prompting the Federal Bureau of Investigation (FBI) to issue a new alert warning individuals, businesses, and organizations of all sizes to stay vigilant. According to the FBI Internet Crime Complaint Center (IC3), more than 5,100 complaints related to ATO fraud have been filed since January 2025, with reported losses exceeding $262 million. The bureau warns that cyber criminals are increasingly impersonating financial institutions to steal money or sensitive information. As the annual Black Friday sale draws millions of shoppers online, the FBI notes that the surge in digital purchases creates an ideal environment for Account Takeover fraud. With consumers frequently visiting unfamiliar retail websites and acting quickly to secure limited-time deals, cyber criminals deploy fake customer support calls, phishing pages, and fraudulent ads disguised as payment or discount portals. The increased online activity during Black Friday makes it easier for attackers to blend in and harder for victims to notice red flags, making the shopping season a lucrative window for ATO scams.

How Account Takeover Fraud Works

In an ATO scheme, cyber criminals gain unauthorized access to online financial, payroll, or health savings accounts. Their goal is simple: steal funds or gather personal data that can be reused for additional fraudulent activities. The FBI notes that these attacks often start with impersonation, either of a financial institution’s staff, customer support teams, or even the institution’s official website. To carry out their schemes, criminals rely heavily on social engineering and phishing websites designed to look identical to legitimate portals. These tactics create a false sense of trust, encouraging account owners to unknowingly hand over their login credentials.

Social Engineering Tactics Increase in Frequency

The FBI highlights that most ATO cases begin with social engineering, where cyber criminals manipulate victims into sharing sensitive information such as passwords, multi-factor authentication (MFA) codes, or one-time passcodes (OTP). Common techniques include:
  • Fraudulent text messages, emails, or calls claiming unusual activity or unauthorized charges. Victims are often directed to click on phishing links or speak to fake customer support representatives.
  • Attackers posing as bank employees or technical support agents who convince victims to share login details under the guise of preventing fraudulent transactions.
  • Scenarios where cyber criminals claim the victim’s identity was used to make unlawful purchases, sometimes involving firearms, and escalate the scam by introducing another impersonator posing as law enforcement.
Once armed with stolen credentials, criminals reset account passwords and gain full control, locking legitimate users out of their own accounts.

Phishing Websites and SEO Poisoning Drive More Losses

Another growing trend is the use of sophisticated phishing domains and websites that perfectly mimic authentic financial institution portals. Victims believe they are logging into their bank or payroll system, but instead, they are handing their details directly to attackers. The FBI also warns about SEO poisoning, a method in which cyber criminals purchase search engine ads or manipulate search rankings to make fraudulent sites appear legitimate. When victims search for their bank online, these deceptive ads redirect them to phishing sites that capture their login information. Once attackers secure access, they rapidly transfer funds to criminal-controlled accounts—many linked to cryptocurrency wallets—making transactions difficult to trace or recover.
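One common defensive building block against such lookalike domains is flagging any domain that sits within a small edit distance of a known-good domain without matching it exactly. The Python sketch below is a simplified illustration: the `LEGITIMATE` allow-list, the example domains, and the distance threshold are hypothetical, and production systems also check homoglyphs, TLS certificates, and registration age.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

LEGITIMATE = {"examplebank.com", "examplepayroll.com"}  # hypothetical allow-list

def looks_like_typosquat(domain: str, max_distance: int = 2) -> bool:
    """A domain that is close to, but not equal to, a known-good domain
    is a typosquatting candidate worth flagging."""
    return any(0 < edit_distance(domain, good) <= max_distance for good in LEGITIMATE)

print(looks_like_typosquat("examp1ebank.com"))  # True  - one character swapped
print(looks_like_typosquat("examplebank.com"))  # False - exact match
print(looks_like_typosquat("unrelated.org"))    # False - too distant
```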

How to Stay Protected Against ATO Fraud

The FBI urges customers and businesses to take proactive measures to defend against ATO fraud attempts:
  • Limit personal information shared publicly, especially on social media.
  • Monitor financial accounts regularly for missing deposits, unauthorized withdrawals, or suspicious wire transfers.
  • Use unique, complex passwords and enable MFA on all accounts.
  • Bookmark financial websites and avoid clicking on search engine ads or unsolicited links.
  • Treat unexpected calls, emails, or texts claiming to be from a bank with skepticism.

What To Do If You Experience an Account Takeover

Victims of ATO fraud are advised to act quickly:
  1. Contact your financial institution immediately to request recalls or reversals, and report the incident to IC3.gov.
  2. Reset all compromised credentials, including any accounts using the same passwords.
  3. File a detailed complaint at IC3.gov with all relevant information, such as impersonated institutions, phishing links, emails, or phone numbers used.
  4. Notify the impersonated company so it can warn others and request fraudulent sites be taken down.
  5. Stay informed through updated alerts and advisories published on IC3.gov.

NSFOCUS Receives International Recognition: 2025 Global Competitive Strategy Leadership for AI-Driven Security Operation

25 November 2025 at 03:06

SANTA CLARA, Calif., Nov 25, 2025 – Recently, NSFOCUS Generative Pre-trained Transformer (NSFGPT) and Intelligent Security Operations Platform (NSFOCUS ISOP) were recognized by the internationally renowned consulting firm Frost & Sullivan and won the 2025 Global Competitive Strategy Leadership for AI-Driven Security Operation [1]. Frost & Sullivan Best Practices Recognition awards companies each year in […]


The post NSFOCUS Receives International Recognition: 2025 Global Competitive Strategy Leadership for AI-Driven Security Operation appeared first on Security Boulevard.

US Imposes Sanctions on Burma Over Cyber Scam Operations

13 November 2025 at 02:12


The US Treasury has sanctioned a Burmese armed group and several related companies for their alleged involvement in cyber scam centers targeting American citizens. The Department of the Treasury’s Office of Foreign Assets Control (OFAC) announced the designations as part of a broader effort to combat organized crime, human trafficking, and cybercriminal activities operating out of Southeast Asia. According to the Treasury Department, OFAC has sanctioned the Democratic Karen Benevolent Army (DKBA), a Burmese armed group, and four of its senior leaders for supporting cyber scam centers in Burma. These operations reportedly defraud Americans through fraudulent investment schemes.

US Treasury Sanctions Burma: OFAC Targets Armed Group and Associated Firms

The agency also designated Trans Asia International Holding Group Thailand Company Limited, Troth Star Company Limited, and Thai national Chamu Sawang, citing links to Chinese organized crime networks. These entities were found to be working with the DKBA and other armed groups to establish and expand scam compounds in the region. Under Secretary of the Treasury for Terrorism and Financial Intelligence John K. Hurley stated, “criminal networks operating out of Burma are stealing billions of dollars from hardworking Americans through online scams.” He emphasized that such activities not only exploit victims financially but also contribute to Burma’s civil conflict by funding armed organizations.

Scam Center Strike Force Established

In coordination with agencies including the Federal Bureau of Investigation (FBI), U.S. Secret Service (USSS), and Department of Justice, a new Scam Center Strike Force has been launched to counter cyber scams originating from Burma, Cambodia, and Laos. This task force will focus on investigating and disrupting the most harmful Southeast Asian scam centers, while also supporting U.S. victims through education and restitution programs. The initiative aims to combine law enforcement, financial action, and diplomatic efforts to curb illicit online operations. (Image: US Treasury sanctions announcement. Source: Department of the Treasury’s Office of Foreign Assets Control (OFAC))

An Ongoing Effort to Protect Victims

The action builds on previous measures targeting illicit actors in the region. Earlier in 2025, the Karen National Army (KNA) and several related companies were sanctioned for their roles in human trafficking and cyber scam activities. Additional designations in Cambodia and Burma followed, targeting groups such as the Prince Group and Huione Group for operating scam compounds and laundering proceeds from virtual currency investment scams. According to government reports, Americans lost over $10 billion in 2024 to Southeast Asia-based cyber scam operations, marking a 66 percent increase from the previous year.

Cyber Scams and Human Trafficking Links

Investigations revealed that many individuals working in scam centers are victims of human trafficking, coerced into online fraud through threats and violence. Some compounds, including Tai Chang and KK Park in Burma’s Karen State, are known hubs for cyber scams. The DKBA reportedly provides protection for these compounds while also participating in violent acts against trafficked workers. These scam networks often use messaging apps and fake investment platforms to deceive Americans. Victims are manipulated into transferring funds to scam-controlled accounts under the guise of legitimate investments.

Sanctions and Legal Implications

Following today’s actions, all property and interests of the designated individuals and entities within the United States are now blocked. The sanctions prohibit any U.S. person from engaging in transactions involving these blocked parties. Violations of OFAC regulations could lead to civil or criminal penalties. The initiative underscores the United States’ continued commitment to disrupting global cyber scam operations, holding organized crime networks accountable, and safeguarding victims of human trafficking and financial exploitation.

OpenAI Battles Court Order to Indefinitely Retain User Chat Data in NYT Copyright Dispute

12 November 2025 at 11:40


The demand started at 1.4 billion conversations.

That staggering initial request from The New York Times, later negotiated down to 20 million randomly sampled ChatGPT conversations, has thrust OpenAI into a legal fight that security experts warn could fundamentally reshape data retention practices across the AI industry. The copyright infringement lawsuit has evolved beyond intellectual property disputes into a broader battle over user privacy, data governance, and the obligations AI companies face when litigation collides with privacy commitments.

OpenAI received a court preservation order on May 13, directing the company to retain all output log data that would otherwise be deleted, regardless of user deletion requests or privacy regulation requirements. District Judge Sidney Stein affirmed the order on June 26 after OpenAI appealed, rejecting arguments that user privacy interests should override preservation needs identified in the litigation.

Privacy Commitments Clash With Legal Obligations

The preservation order forces OpenAI to maintain consumer ChatGPT and API user data indefinitely, directly conflicting with the company's standard 30-day deletion policy for conversations users choose not to save. This requirement encompasses data from December 2022 through November 2024, affecting ChatGPT Free, Plus, Pro, and Team subscribers, along with API customers without Zero Data Retention agreements.

ChatGPT Enterprise, ChatGPT Edu, and business customers with Zero Data Retention contracts remain excluded from the preservation requirements. The order does not change OpenAI's policy of not training models on business data by default.
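A toy sketch of the carve-out logic as reported, purely for illustration: the plan names mirror the tiers listed above, but the function and field names are assumptions, not OpenAI's implementation.

```python
def preserved_under_order(plan: str, has_zdr_contract: bool = False) -> bool:
    """Return True if a given account tier's output logs fall under the
    May 13 preservation order, per the carve-outs described above
    (a deliberate simplification)."""
    exempt_plans = {"ChatGPT Enterprise", "ChatGPT Edu"}
    return plan not in exempt_plans and not has_zdr_contract

print(preserved_under_order("ChatGPT Plus"))                # True
print(preserved_under_order("API", has_zdr_contract=True))  # False - Zero Data Retention
print(preserved_under_order("ChatGPT Enterprise"))          # False - exempt tier
```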

OpenAI implemented restricted access protocols, limiting preserved data to a small, audited legal and security team. The company maintains this information remains locked down and cannot be used beyond meeting legal obligations. No data will be turned over to The New York Times, the court, or external parties at this time.

Also read: OpenAI Announces Safety and Security Committee Amid New AI Model Development

Copyright Case Drives Data Preservation Demands

The New York Times filed its copyright infringement lawsuit in December 2023, alleging OpenAI illegally used millions of news articles to train large language models including ChatGPT and GPT-4. The lawsuit claims this unauthorized use constitutes copyright infringement and unfair competition, arguing OpenAI profits from intellectual property without permission or compensation.

The Times seeks more than monetary damages. The lawsuit demands destruction of all GPT models and training sets using its copyrighted works, with potential statutory and actual damages reaching billions of dollars.

The newspaper's legal team argued their preservation request warranted approval partly because another AI company previously agreed to hand over 5 million private user chats in an unrelated case. OpenAI rejected this precedent as irrelevant to its situation.

Technical and Regulatory Complications

Complying with indefinite retention requirements presents significant engineering challenges. OpenAI must build systems capable of storing hundreds of millions of conversations from users worldwide, requiring months of development work and substantial financial investment.

The preservation order creates conflicts with international data protection regulations including GDPR. While OpenAI's terms of service allow data preservation for legal requirements—a point Judge Stein emphasized—the company argues The Times's demands exceed reasonable discovery scope and abandon established privacy norms.

OpenAI proposed several privacy-preserving alternatives, including targeted searches over preserved samples to identify conversations potentially containing New York Times article text. These suggestions aimed to provide only data relevant to copyright claims while minimizing broader privacy exposure.

Recent court modifications provided limited relief. As of September 26, 2025, OpenAI no longer must preserve all new chat logs going forward. However, the company must retain all data already saved under the previous order and maintain information from ChatGPT accounts flagged by The New York Times, with the newspaper authorized to expand its flagged user list while reviewing preserved records.

"Our long-term roadmap includes advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT. We will build fully automated systems to detect safety issues in our products. Only serious misuse and critical risks—such as threats to someone’s life, plans to harm others, or cybersecurity threats—may ever be escalated to a small, highly vetted team of human reviewers." - Dane Stuckey, Chief Information Security Officer, OpenAI 

Implications for AI Governance

The case transforms abstract AI privacy concerns into immediate operational challenges affecting 400 million ChatGPT users. Security practitioners note the preservation order shatters fundamental assumptions about data deletion in AI interactions.

OpenAI CEO Sam Altman characterized the situation as accelerating needs for "AI privilege" concepts, suggesting conversations with AI systems should receive protections similar to attorney-client privilege. The company frames unlimited data preservation as setting dangerous precedents for AI communication privacy.

The litigation presents concerning scenarios for enterprise users integrating ChatGPT into applications handling sensitive information. Organizations using OpenAI's technology for healthcare, legal, or financial services must reassess compliance with regulations including HIPAA and GDPR given indefinite retention requirements.

Legal analysts warn this case likely invites third-party discovery attempts, with litigants in unrelated cases seeking access to adversaries' preserved AI conversation logs. Such developments would further complicate data privacy issues and potentially implicate attorney-client privilege protections.

The outcome will significantly impact how AI companies access and utilize training data, potentially reshaping development and deployment of future AI technologies. Central questions remain unresolved regarding fair use doctrine application to AI model training and the boundaries of discovery in AI copyright litigation.

Also read: OpenAI’s SearchGPT: A Game Changer or Pandora’s Box for Cybersecurity Pros?

UK Tightens Cyber Laws as Attacks Threaten Hospitals, Energy, and Transport

12 November 2025 at 00:44


The UK government has unveiled the Cyber Security and Resilience Bill, a landmark move to strengthen UK cyber defences across essential public services, including healthcare, transport, water, and energy. The legislation aims to shield the nation’s critical national infrastructure from increasingly complex cyberattacks, which have cost the UK economy nearly £15 billion annually. According to the latest Cyble report — “Europe’s Threat Landscape: What 2025 Exposed and Why 2026 Could Be Worse”, Europe witnessed over 2,700 cyber incidents in 2025 across sectors such as BFSI, Government, Retail, and Energy. The report highlights how ransomware groups and politically motivated hacktivists have reshaped the regional threat landscape, emphasizing the urgency of unified cyber resilience strategies.

Cyber Security and Resilience Bill to Protect Critical National Infrastructure

At the heart of the new Cyber Security and Resilience Bill is the protection of vital services that people rely on daily. The legislation will ensure hospitals, water suppliers, and transport operators are equipped with stronger cyber resilience capabilities to prevent service disruptions and mitigate risks from future attacks. The Cyber Security and Resilience Bill will, for the first time, regulate medium and large managed service providers offering IT, cybersecurity, and digital support to organisations like the NHS. These providers will be required to report significant incidents promptly and maintain contingency plans for rapid recovery. Regulators will also gain authority to designate critical suppliers — such as diagnostic service providers or energy suppliers — and enforce minimum security standards to close supply chain gaps that cybercriminals could exploit. To strengthen compliance, enforcement will be modernised with turnover-based penalties for serious violations, ensuring cybersecurity remains a non-negotiable priority. The Technology Secretary will also have powers to direct organisations, including NHS Trusts and utilities, to take urgent actions to mitigate threats to national security.

UK Cyber Defences Face Mounting Pressure Amid Rising Attacks

Recent data shows the average cost of a significant cyberattack in the UK now exceeds £190,000, amounting to nearly £14.7 billion in total annual losses. The Office for Budget Responsibility (OBR) warns that a large-scale attack on critical national infrastructure could push borrowing up by £30 billion, equivalent to 1.1% of GDP. These findings align closely with Cyble’s Europe’s Threat Landscape report, which observed the rise of new ransomware groups like Qilin and Akira and a surge in pro-Russian hacktivism targeting European institutions through DDoS and defacement campaigns. The report also revealed that the retail sector accounted for 41% of all compromised access sales, demonstrating the widespread impact of evolving cybercrime tactics. Both the government and industry experts agree that defending against these threats requires a unified approach. National Cyber Security Centre (NCSC) CEO Dr. Richard Horne emphasised that “the real-world impacts of cyberattacks have never been more evident,” calling the Bill “a crucial step in protecting our most critical services.”

Building a Secure and Resilient Future

The Cyber Security and Resilience Bill represents a major shift in how the UK safeguards its people, economy, and digital ecosystem. By tightening cyber regulations for essential and digital services, the government seeks to reduce vulnerabilities and strengthen the UK’s cyber resilience posture for the years ahead. Industry leaders have welcomed the legislation. Darktrace CEO Jill Popelka praised the government’s initiative to modernise cyber laws in an era where attackers are leveraging AI-driven tools. Cisco UK’s CEO Sarah Walker also noted that only 8% of UK organisations are currently “mature” in their cybersecurity readiness, highlighting the importance of continuous improvement. Meanwhile, the Cyble report on Europe’s Threat Landscape warns that as state-backed operations merge with financially motivated attacks, 2026 could bring even more volatility. Cyble Research and Intelligence Labs recommend that organisations adopt intelligence-led defence strategies and proactive threat monitoring to stay ahead of emerging adversaries.

The Road Ahead

Both the Cyber Security and Resilience Bill and Cyble’s Europe’s Threat Landscape findings serve as a wake-up call: the UK and Europe are facing a new era of persistent cyber risks. Strengthening collaboration between government, regulators, and private industry will be key to securing critical systems and ensuring operational continuity. Organizations can explore deeper insights and practical recommendations from Cyble’s Europe’s Threat Landscape: What 2025 Exposed — and Why 2026 Could Be Worse report here, which provides detailed sectoral analysis and strategies to build a stronger, more resilient future against cyber threats.

Global GRC Platform Market Set to Reach USD 127.7 Billion by 2033

12 November 2025 at 00:36


The GRC platform market is witnessing strong growth as organizations across the globe focus on strengthening governance, mitigating risks, and meeting evolving compliance demands. According to recent estimates, the market was valued at USD 49.2 billion in 2024 and is projected to reach USD 127.7 billion by 2033, growing at a CAGR of 11.18% between 2025 and 2033.
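The headline numbers are internally consistent: compounding the 2024 base over the nine years from 2025 to 2033 at the stated CAGR,

\[
49.2 \times (1 + 0.1118)^{9} \approx 49.2 \times 2.60 \approx 127.7\ \text{(USD billion)}.
\]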

This GRC platform market growth reflects the increasing need to protect sensitive data, manage cyber risks, and streamline regulatory compliance processes.

Rising Need for Governance, Risk, and Compliance Solutions

As cyberthreats continue to rise, enterprises are turning to GRC platforms to gain centralized visibility into their risk posture. These solutions help organizations identify, assess, and respond to potential risks, ensuring stronger governance and reduced operational disruption.

The market’s momentum is also fueled by heightened regulatory scrutiny and the introduction of new compliance frameworks worldwide. Businesses are under pressure to maintain transparency, accuracy, and accountability in their governance and reporting processes — areas where a GRC platform adds significant value.

By integrating governance, risk, and compliance management into one system, companies can make informed decisions, reduce human error, and ensure consistent adherence to evolving regulations.

GRC Platform Market Insights and Key Segments

The GRC platform market is segmented based on deployment model, solution, component, end-user, and industry vertical.

  • Deployment Model: The on-premises deployment model dominates the market due to enhanced security and customization options. It is preferred by organizations handling sensitive data or operating under strict regulatory environments.

  • Solution Type: Compliance management holds the largest market share as businesses prioritize automation of documentation, tracking, and reporting to stay audit-ready.

  • Component: Software solutions lead the market by offering analytics, policy management, and workflow automation to streamline risk processes.

  • End User: Medium enterprises represent the largest segment, focusing on scalable solutions that balance security and efficiency.

  • Industry Vertical: The BFSI sector remains a key adopter due to its complex regulatory landscape and high data security requirements.

Key Drivers of the GRC Platform Market

Several factors contribute to the rapid expansion of the GRC platform market:

  1. Escalating Cyber Risks: As cyber incidents become more frequent and sophisticated, organizations seek to integrate cybersecurity measures within GRC frameworks. These integrations improve detection, response, and recovery capabilities.

  2. Evolving Compliance Standards: Increasing regulatory pressure drives adoption of GRC solutions to ensure businesses stay aligned with global standards like GDPR, HIPAA, and ISO 27001.

  3. Automation and Efficiency: Advanced GRC software reduces manual reporting and enhances accuracy, enabling faster audit responses and improved decision-making.

  4. Operational Resilience: A robust GRC system ensures business continuity by minimizing vulnerabilities and improving crisis management strategies.

Regional Outlook and Future Trends

North America currently leads the GRC platform market, supported by mature digital infrastructure and strong regulatory frameworks. Meanwhile, the Asia-Pacific region is emerging as a key growth area, driven by increased cloud adoption and a rising focus on data privacy.

In the coming years, integration with AI, analytics, and threat intelligence tools will transform how organizations approach governance and risk. The market is expected to evolve toward more predictive and adaptive compliance solutions.

Leveraging Threat Intelligence for Stronger Risk Governance

As organizations expand their digital ecosystems, threat intelligence has become a vital part of effective risk management. Platforms like Cyble help enterprises identify, monitor, and mitigate emerging cyber risks before they escalate. Integrating such intelligence-driven insights into a GRC platform strengthens visibility and helps build a proactive security posture.

For security leaders aiming to align governance with real-time intelligence, exploring a quick free demo of integrated risk and compliance tools can offer valuable perspective on enhancing organizational resilience.

FCC Set to Reverse Course on Telecom Cybersecurity Mandate

31 October 2025 at 07:36


The Federal Communications Commission will vote next month to rescind a controversial January 2025 Declaratory Ruling that attempted to impose sweeping cybersecurity requirements on telecommunications carriers by reinterpreting a 1994 wiretapping law.

In an Order on Reconsideration circulated Thursday, the FCC concluded that the previous interpretation was both legally erroneous and ineffective at promoting cybersecurity.

The reversal marks a dramatic shift in the FCC's approach to telecommunications security, moving away from mandated requirements toward voluntary industry collaboration—particularly in response to the massive Salt Typhoon espionage campaign sponsored by China that compromised at least eight U.S. communications companies in 2024.

CALEA Reinterpretation

On January 16, 2025—just five days before a change in administration—the FCC adopted a Declaratory Ruling claiming that section 105 of the Communications Assistance for Law Enforcement Act (CALEA) "affirmatively requires telecommunications carriers to secure their networks from unlawful access to or interception of communications."

CALEA, enacted in 1994, was designed to preserve law enforcement's ability to conduct authorized electronic surveillance as telecommunications technology evolved. Section 105 specifically requires that interception of communications within a carrier's "switching premises" can only be activated with a court order and with intervention by a carrier employee.

The January ruling took this narrow provision focused on lawful wiretapping and expanded it dramatically, interpreting it as requiring carriers to prevent all unauthorized interceptions across their entire networks. The Commission stated that carriers would be "unlikely" to satisfy these obligations without adopting basic cybersecurity practices including role-based access controls, changing default passwords, requiring minimum password strength, and adopting multifactor authentication.
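For a sense of what such a minimum-password-strength check looks like in practice, here is a toy Python sketch. The rule set, the 12-character minimum, and the `DEFAULT_PASSWORDS` list are illustrative assumptions, not anything specified in the ruling.

```python
import re

DEFAULT_PASSWORDS = {"admin", "password", "12345678", "changeme"}  # illustrative only

def strength_failures(password: str, min_length: int = 12) -> list[str]:
    """Return the basic-hygiene rules a password fails: the kind of
    default-password and minimum-strength checks the January ruling
    described as baseline cybersecurity practices."""
    failures = []
    if password.lower() in DEFAULT_PASSWORDS:
        failures.append("matches a known default password")
    if len(password) < min_length:
        failures.append(f"shorter than {min_length} characters")
    if not re.search(r"[A-Z]", password):
        failures.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        failures.append("no lowercase letter")
    if not re.search(r"\d", password):
        failures.append("no digit")
    return failures

print(strength_failures("changeme"))          # fails several rules
print(strength_failures("C0rrectHorse...9"))  # [] - passes this toy policy
```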

The ruling emphasized that "enterprise-level implementation of these basic cybersecurity hygiene practices is necessary" because vulnerabilities in any part of a network could provide attackers unauthorized access to surveillance systems. It concluded that carriers could be in breach of statutory obligations if they failed to adopt certain cybersecurity practices—even without formal rules adopted by the Commission.

Industry Pushback and Legal Questions

CTIA – The Wireless Association, NCTA – The Internet & Television Association, and USTelecom – The Broadband Association filed a petition for reconsideration on February 18, arguing that the ruling exceeded the FCC's statutory authority and misinterpreted CALEA.

The new FCC agreed with these concerns, finding three fundamental legal flaws in the January ruling:

Enforcement Authority: The Commission concluded it lacks authority to enforce its interpretation of CALEA without first adopting implementing rules through notice-and-comment rulemaking. CALEA section 108 commits enforcement authority to the courts, not the FCC. The Commission noted that when it previously wanted to enforce CALEA requirements, it codified them as rules in 2006 specifically to gain enforcement authority.

"Switching Premises" Limitation: Section 105 explicitly refers to interceptions "effected within its switching premises," but the ruling appeared to impose obligations across carriers' entire networks. The Commission found this expansion ignored clear statutory limits.

"Interception" Definition: CALEA incorporates the Wiretap Act's definition of "intercept," which courts have consistently interpreted as limited to communications intercepted contemporaneously with transmission—not stored data. The ruling's required practices target both data in transit and at rest, exceeding section 105's scope.

"It was unlawful because the FCC purported to read a statute that required telecommunications carriers to allow lawful wiretaps within a certain portion of their network as a provision that required carriers to adopt specific network management practices in every portion of their network," the new order states.

The Voluntary Approach of Provider Commitments

Rather than mandated requirements, the FCC pointed to voluntary commitments from communications providers following collaborative engagement throughout 2025. In an October 16 ex parte filing, industry associations detailed "extensive, urgent, and coordinated efforts to mitigate operational risks, protect consumers, and preserve national security interests."

These voluntary measures include:

  • Accelerated patching cycles for outdated or vulnerable equipment
  • Updated and reviewed access controls
  • Disabled unnecessary outbound connections to limit lateral network movement
  • Improved threat-hunting efforts
  • Increased cybersecurity information sharing with federal government and within the communications sector
  • Establishment of the Communications Cybersecurity Information Sharing and Analysis Center (C2 ISAC) for real-time threat intelligence sharing
  • New collaboration forum for Chief Information Security Officers from U.S. and Canadian providers

"The government-industry partnership model of collaboration has enabled communications providers to respond swiftly and agilely to Salt Typhoon, reduce vulnerabilities exposed by the attack, and bolster network cyber defenses," the industry associations stated.

Salt Typhoon Context

The Salt Typhoon attacks, disclosed in September 2024, involved a PRC-sponsored advanced persistent threat group infiltrating U.S. communications companies as part of a massive espionage campaign affecting dozens of countries. Critically, the attacks exploited publicly known common vulnerabilities and exposures (CVEs) rather than zero-day vulnerabilities—meaning they targeted avoidable weaknesses rather than previously unknown flaws.
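Because the exploited weaknesses were publicly known CVEs rather than zero-days, the basic defensive pattern is an inventory-versus-patch-level cross-check. Below is a minimal Python sketch of that idea, with a placeholder CVE identifier and hypothetical product names; real programs would pull from a live vulnerability feed rather than a hard-coded table.

```python
# Hypothetical known-exploited-vulnerability table: product -> CVE and fixed version.
KNOWN_EXPLOITED = {
    "edge-router-os": {"cve": "CVE-XXXX-YYYY", "fixed_in": (7, 2, 1)},  # placeholder ID
}

def parse_version(v: str) -> tuple[int, ...]:
    """Turn a dotted version string into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))

def unpatched_findings(inventory: dict[str, str]) -> list[str]:
    """Flag inventory entries still running a version older than the fix,
    i.e. exposed to a publicly known CVE rather than a zero-day."""
    findings = []
    for product, version in inventory.items():
        entry = KNOWN_EXPLOITED.get(product)
        if entry and parse_version(version) < entry["fixed_in"]:
            findings.append(f"{product} {version}: patch for {entry['cve']} not applied")
    return findings

print(unpatched_findings({"edge-router-os": "7.1.9", "core-switch-os": "3.4.0"}))
# ['edge-router-os 7.1.9: patch for CVE-XXXX-YYYY not applied']
```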

The FCC noted that following its engagement with carriers after Salt Typhoon, providers agreed to implement additional cybersecurity controls representing "a significant change in cybersecurity practices compared to the measures in place in January."

Also read: Salt Typhoon Cyberattack: FBI Investigates PRC-linked Breach of US Telecoms

Targeted Regulatory Actions Continue

While rescinding the broad CALEA interpretation, the FCC emphasized it continues pursuing targeted cybersecurity regulations in specific areas where it has clear legal authority:

  • Rules requiring submarine cable licensees to create and implement cybersecurity risk management plans
  • Rules ensuring test labs and certification bodies in the equipment authorization program aren't controlled by foreign adversaries
  • Investigations of Chinese Communist Party-aligned businesses whose equipment appears on the FCC's Covered List
  • Proceedings to revoke authorizations for entities like HKT (International) Limited over national security concerns

"The Commission is leveraging the full range of the Commission's regulatory, investigatory, and enforcement authorities to protect Americans and American companies from foreign adversaries," the order states, while maintaining that collaboration with carriers coupled with targeted, legally robust regulatory and enforcement measures, has proven successful.

The FCC is also set to withdraw the Notice of Proposed Rulemaking that accompanied the January Declaratory Ruling, which would have proposed specific cybersecurity requirements for a broad array of service providers. The NPRM was never published in the Federal Register, so the public comment period never commenced.

The Commission's new approach reflects a bet that voluntary industry cooperation, supported by targeted regulations in specific high-risk areas, will likely prove more effective than sweeping mandates of questionable legal foundation.
