New laws to be considered after ‘harrowing stories’ from ex-Vodafone franchisees

12 December 2025 at 02:00

Concerns about power imbalance in franchise agreements amid claims over firm’s treatment of small-business owners

The government will consider new laws to correct the power imbalance in franchise agreements in response to the “harrowing stories” of small business people running Vodafone stores.

The move follows allegations of suicide and attempted suicide among shopkeepers who had agreed to deals to run retail outlets for the £18bn telecoms company, which were revealed by the Guardian on Monday.

Continue reading...

© Photograph: Andy Rain/EPA


City of Cambridge Advises Password Reset After Nationwide CodeRED Data Breach

12 December 2025 at 00:56

The City of Cambridge has released an important update regarding the OnSolve CodeRED emergency notifications system, also known locally as Cambridge’s reverse 911 system. The platform, widely used by thousands of local governments and public safety agencies across the country, was taken offline in November following a nationwide OnSolve CodeRED cyberattack. Residents who rely on CodeRED alerts for information about snow emergencies, evacuations, water outages, or other service disruptions are being asked to take immediate steps to secure their accounts and continue receiving notifications.

Impact of the OnSolve CodeRED Cyberattack on User Data

According to city officials, the data breach affected CodeRED databases nationwide, including Cambridge. The compromised information may include phone numbers, email addresses, and passwords of registered users. Importantly, the attack targeted the OnSolve CodeRED system itself, not the City of Cambridge or its departments. This OnSolve CodeRED cyberattack incident mirrors similar concerns raised in Monroe County, Georgia, where officials confirmed that residents’ personal information was also exposed. The Monroe County Emergency Management Agency emphasized that the breach was part of a nationwide cybersecurity incident and not a local failure.

Transition to CodeRED by Crisis24

In response, OnSolve permanently decommissioned the old CodeRED platform and migrated services to a new, secure environment known as CodeRED by Crisis24. The new system has undergone comprehensive security audits, including penetration testing and system hardening, to ensure stronger protection against future threats. For Cambridge residents, previously registered contact information has been imported into the new platform. However, due to security concerns, all passwords have been removed. Users must now reset their credentials before accessing their accounts.

Steps for City of Cambridge Residents and Users

To continue receiving emergency notifications, residents should:
  • Visit accountportal.onsolve.net/cambridgema
  • Enter their username (usually an email address)
  • Select “forgot password” to verify and reset credentials
  • If unsure of their username, use the “forgot username” option
Officials strongly advise against reusing old CodeRED passwords, as they may have been compromised. Instead, users should create strong, unique passwords and update their information once logged in. Additionally, anyone who used the same password across multiple accounts is urged to change those credentials immediately to reduce the risk of further exposure.
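The "strong, unique password" advice above can be made concrete. The following is an illustrative sketch (not part of the city's or Crisis24's guidance) showing how a cryptographically secure random password can be generated with Python's standard-library `secrets` module; the length and character-class requirements are assumptions chosen for the example, not CodeRED requirements.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing at least one lowercase
    letter, one uppercase letter, one digit, and one symbol.
    Uses the secrets module, which draws from a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        # Re-draw until every required character class is present.
        if (any(c.islower() for c in pwd)
                and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)
                and any(c in string.punctuation for c in pwd)):
            return pwd

print(generate_password())
```

In practice, a password manager accomplishes the same thing and also stores the result, which is what makes using a different password per site feasible.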

Broader National Context

The Monroe County cyberattack highlights the scale of the issue. Officials there reported that data such as names, addresses, phone numbers, and passwords were compromised. Residents who enrolled before March 31, 2025, had their information migrated to the new Crisis24 CodeRED platform, while those who signed up afterward must re‑enroll. OnSolve has reassured communities that the intrusion was contained within the original system and did not spread to other networks. While there is currently no evidence of identity theft, the incident underscores the growing risks of cyber intrusions nationwide.

Resources for Cybersecurity Protection

Residents who believe they may have been victims of cyber‑enabled fraud are encouraged to report incidents to the FBI Internet Crime Complaint Center (IC3) at ic3.gov. Additional resources are available to help protect individuals and families from fraud and cybercrime. Security experts note that the rising frequency of attacks highlights the importance of independent threat‑intelligence providers. Companies such as Cyble track vulnerabilities and cybercriminal activity across global networks, offering organizations tools to strengthen defenses and respond more quickly to incidents.

Looking Ahead

The City of Cambridge has thanked residents for their patience as staff worked with OnSolve to restore emergency alert capabilities. Officials emphasized that any breach of security is a serious concern and confirmed that they will continue monitoring the new CodeRED by Crisis24 platform to ensure its standards are upheld. In addition, the City is evaluating other emergency alerting systems to determine the most effective long‑term solution for community safety.

AIs Exploiting Smart Contracts

11 December 2025 at 12:06

I have long maintained that smart contracts are a dumb idea: that a human process is actually a security feature.

Here’s some interesting research on training AIs to automatically exploit smart contracts:

AI models are increasingly good at cyber tasks, as we’ve written about before. But what is the economic impact of these capabilities? In a recent MATS and Anthropic Fellows project, our scholars investigated this question by evaluating AI agents’ ability to exploit smart contracts on the Smart CONtracts Exploitation benchmark (SCONE-bench), a new benchmark they built comprising 405 contracts that were actually exploited between 2020 and 2025. On contracts exploited after the latest knowledge cutoffs (June 2025 for Opus 4.5 and March 2025 for other models), Claude Opus 4.5, Claude Sonnet 4.5, and GPT-5 developed exploits collectively worth $4.6 million, establishing a concrete lower bound for the economic harm these capabilities could enable. Going beyond retrospective analysis, we evaluated both Sonnet 4.5 and GPT-5 in simulation against 2,849 recently deployed contracts without any known vulnerabilities. Both agents uncovered two novel zero-day vulnerabilities and produced exploits worth $3,694, with GPT-5 doing so at an API cost of $3,476. This demonstrates as a proof-of-concept that profitable, real-world autonomous exploitation is technically feasible, a finding that underscores the need for proactive adoption of AI for defense...

The post AIs Exploiting Smart Contracts appeared first on Security Boulevard.

Federal Grand Jury Charges Former Manager with Government Contractor Fraud

11 December 2025 at 04:16

Government contractor fraud is at the heart of a new indictment returned by a federal grand jury in Washington, D.C. against a former senior manager in Virginia. Prosecutors say Danielle Hillmer, 53, of Chantilly, misled federal agencies for more than a year about the security of a cloud platform used by the U.S. Army and other government customers. The indictment, announced yesterday, charges Hillmer with major government contractor fraud, wire fraud, and obstruction of federal audits. According to prosecutors, she concealed serious weaknesses in the system while presenting it as fully compliant with strict federal cybersecurity standards.

Government Contractor Fraud: Alleged Scheme to Mislead Agencies

According to court documents, Hillmer’s actions spanned from March 2020 through November 2021. During this period, she allegedly obstructed auditors and misrepresented the platform’s compliance with the Federal Risk and Authorization Management Program (FedRAMP) and the Department of Defense’s Risk Management Framework. The indictment claims that while the platform was marketed as a secure environment for federal agencies, it lacked critical safeguards such as access controls, logging, and monitoring. Despite repeated warnings, Hillmer allegedly insisted the system met the FedRAMP High baseline and DoD Impact Levels 4 and 5, both of which are required for handling sensitive government data.

Obstruction of Audits

Federal prosecutors allege Hillmer went further by attempting to obstruct third-party assessors during audits in 2020 and 2021. She is accused of concealing deficiencies and instructing others to hide the true state of the system during testing and demonstrations. The indictment also states that Hillmer misled the U.S. Army to secure sponsorship for a Department of Defense provisional authorization. She allegedly submitted, and directed others to submit, authorization materials containing false information to assessors, authorizing officials, and government customers. These misrepresentations, prosecutors say, allowed the contractor to obtain and maintain government contracts under false pretenses.

Charges and Potential Penalties

Hillmer faces two counts of wire fraud, one count of major government fraud, and two counts of obstruction of a federal audit. If convicted, she could face:
  • Up to 20 years in prison for each wire fraud count
  • Up to 10 years in prison for major government fraud
  • Up to 5 years in prison for each obstruction count
A federal district court judge will determine any sentence after considering the U.S. Sentencing Guidelines and other statutory factors. The indictment was announced by Acting Assistant Attorney General Matthew R. Galeotti of the Justice Department’s Criminal Division and Deputy Inspector General Robert C. Erickson of the U.S. General Services Administration Office of Inspector General (GSA-OIG). The case is being investigated by the GSA-OIG, the Defense Criminal Investigative Service, the Naval Criminal Investigative Service, and the Department of the Army Criminal Investigation Division. Trial Attorneys Lauren Archer and Paul Hayden of the Criminal Division’s Fraud Section are prosecuting the case.

Broader Implications of Government Contractor Fraud

The indictment highlights ongoing concerns about the integrity of cloud platforms used by federal agencies. Programs like FedRAMP and the DoD’s Risk Management Framework are designed to ensure that systems handling sensitive government data meet rigorous security standards. Allegations that a contractor misrepresented compliance raise questions about oversight and the risks posed to national security when platforms fall short of requirements. Federal officials emphasized that the case highlights the importance of transparency and accountability in government contracting, particularly in areas involving cybersecurity. It is important to note that an indictment is merely an allegation. Hillmer, like all defendants, is presumed innocent until proven guilty beyond a reasonable doubt in a court of law.

Ring-fencing AI Workloads for NIST and ISO Compliance 

10 December 2025 at 12:32

AI is transforming enterprise productivity and reshaping the threat model at the same time. Unlike human users, agentic AI and autonomous agents operate at machine speed and inherit broad network permissions and embedded credentials. This creates new security and compliance …

The post Ring-fencing AI Workloads for NIST and ISO Compliance  appeared first on 12Port.

The post Ring-fencing AI Workloads for NIST and ISO Compliance  appeared first on Security Boulevard.

Australia’s Social Media Ban for Kids: Protection, Overreach or the Start of a Global Shift?

10 December 2025 at 04:23

On a cozy December morning, as children in Australia set their bags aside for the holiday season and picked up their tablets and phones to take that selfie announcing to the world they were all set for the fun to begin, something felt amiss. They couldn’t access their Snapchat and Instagram accounts. No, it wasn’t another downtime caused by a cyberattack, because they could see their parents lounging on the couch and laughing at the dog dance reels. So why couldn’t they? The answer: the ban on social media for children under 16 had officially taken effect. It wasn’t just one or 10 or 100 but more than one million young users who woke up locked out of their social media. No TikTok scroll. No Snapchat streak. No YouTube comments. Australia had quietly entered a new era, the world’s first nationwide ban on social media for children under 16, effective December 10. The move has set off global debate, parental relief, youth frustration, and a broader question: is this the start of a global shift, or a risky social experiment? Prime Minister Anthony Albanese was clear about why his government took this unprecedented step. “Social media is doing harm to our kids, and I’m calling time on it,” he said during a press conference. “I’ve spoken to thousands of parents… they’re worried sick about the safety of our kids online, and I want Australian families to know that the Government has your back.” Under the Albanese government’s social media policy, platforms including Instagram, Facebook, X, Snapchat, TikTok, Reddit, Twitch, Kick, Threads and YouTube must block users under 16, or face fines of up to AU$32 million. Parents and children won’t be penalized, but tech companies will.

Australia's Ban on Social Media: A Big Question

Albanese pointed to rising concerns about the effects of social media on children, from body-image distortion to exposure to inappropriate content and addictive algorithms that tug at young attention spans. Research supports these concerns. A Pew Research Center study found:
  • 48% of teens say social media has a mostly negative effect on people their age, up sharply from 32% in 2022.
  • 45% feel they spend too much time on social media.
  • Teen girls experience more negative impacts than boys, including mental health struggles (25% vs 14%) and loss of confidence (20% vs 10%).
  • Yet paradoxically, 74% of teens feel more connected to friends because of social media, and 63% use it for creativity.
These contradictions make the issue far from black and white. Psychologists remind us that adolescence, beginning around age 10 and stretching into the mid-20s, is a time of rapid biological and social change, and that maturity levels vary. This means that a one-size-fits-all ban on social media may overshoot the mark.

Ban on Social Media for Users Under 16: How People Reacted

Australia’s announcement, first revealed in November 2024, has motivated countries from Malaysia to Denmark to consider similar legislation. But not everyone is convinced this is the right way forward.

Supporters Applaud “A Chance at a Real Childhood”

Pediatric occupational therapist Cris Rowan, who has spent 22 years working with children, celebrated the move: “This may be the first time children have the opportunity to experience a real summer,” she said. “Canada should follow Australia’s bold initiative. Parents and teachers can start their own movement by banning social media from homes and schools.” Parents’ groups have also welcomed the decision, seeing it as a necessary intervention in a world where screens dominate childhood.

Others Say the Ban Is Imperfect, but Necessary

Australian author Geoff Hutchison puts it bluntly: “We shouldn’t look for absolutes. It will be far from perfect. But we can learn what works… We cannot expect the repugnant tech bros to care.” His view reflects a broader belief that tech companies have too much power, and too little accountability.

Experts Warn Against False Security 

However, some experts caution that Australia’s ban on social media may create the illusion of safety while failing to address deeper issues. Professor Tama Leaver, an Internet Studies expert at Curtin University, told The Cyber Express that while the ban addresses some risks, such as algorithmic amplification of inappropriate content and endless scrolling, many online dangers remain.

“The social media ban only really addresses one set of risks for young people, which is algorithmic amplification of inappropriate content and the doomscrolling or infinite scroll. Many risks remain. The ban does nothing to address cyberbullying since messaging platforms are exempt from the ban, so cyberbullying will simply shift from one platform to another.”

Leaver also noted that restricting access to popular platforms will not drive children offline. With the ban in place, young users will explore whatever digital spaces remain, which could be less regulated and potentially riskier.

“Young people are not leaving the digital world. If we take some apps and platforms away, they will explore and experiment with whatever is left. If those remaining spaces are less known and more risky, then the risks for young people could definitely increase. Ideally the ban will lead to more conversations with parents and others about what young people explore and do online, which could mitigate many of the risks.”

From a broader perspective, Leaver emphasized that the ban on social media will only be fully beneficial if accompanied by significant investment in digital literacy and digital citizenship programs across schools:

“The only way this ban could be fully beneficial is if there is a huge increase in funding and delivery of digital literacy and digital citizenship programs across the whole K-12 educational spectrum. We have to formally teach young people those literacies they might otherwise have learnt socially, otherwise the ban is just a 3 year wait that achieves nothing.”

He added that platforms themselves should take a proactive role in protecting children:

“There is a global appetite for better regulation of platforms, especially regarding children and young people. A digital duty of care which requires platforms to examine and proactively reduce or mitigate risks before they appear on platforms would be ideal, and is something Australia and other countries are exploring. Minimizing risks before they occur would be vastly preferable to the current processes which can only usually address harm once it occurs.”

Looking at the global stage, Leaver sees Australia’s ban on social media as a potential learning opportunity for other nations:

“There is clearly global appetite for better and more meaningful regulation of digital platforms. For countries considering their own bans, taking the time to really examine the rollout in Australia, to learn from our mistakes as much as our ambitions, would seem the most sensible path forward.”

Other specialists continue to warn that the ban on social media could isolate vulnerable teenagers or push them toward more dangerous, unregulated corners of the internet.

Legal Voices Raise Serious Constitutional Questions

Senior Supreme Court Advocate Dr. K. P. Kylasanatha Pillay offered a thoughtful reflection: “Exposure of children to the vagaries of social media is a global concern… But is a total ban feasible? We must ask whether this is a reasonable restriction or if it crosses the limits of state action. Not all social media content is harmful. The best remedy is to teach children awareness.” His perspective reflects growing debate about rights, safety, and state control.

LinkedIn, Reddit, and the Public Divide

Social media itself has become the battleground for reactions. On Reddit, youngsters were particularly vocal about the ban. One teen wrote: “Good intentions, bad execution. This will make our generation clueless about internet safety… Social media is how teenagers express themselves. This ban silences our voices.” Another pointed out the easy loophole: “Bypassing this ban is as easy as using a free VPN. Governments don’t care about safety — they want control.” But one adult user disagreed: “Everyone against the ban seems to be an actual child. I got my first smartphone at 20. My parents were right — early exposure isn’t always good.” This generational divide is at the heart of the debate.

Brands, Marketers, and Schools Brace for Impact

Bindu Sharma, Founder of World One Consulting, highlighted the global implications: “Ten of the biggest platforms were ordered to block children… The world is watching how this plays out.” If the ban succeeds, brands may rethink how they target younger audiences. If it fails, digital regulation worldwide may need reimagining.

Where Does This Leave the World?

Australia’s decision to ban social media for children under 16 is bold, controversial, and rooted in good intentions. It could reshape how societies view childhood, technology, and digital rights. But as critics note, banning social media platforms can also create unintended consequences, from delinquency to digital illiteracy. What’s clear is this: Australia has started a global conversation that’s no longer avoidable. As one LinkedIn user concluded: “Safety of the child today is assurance of the safety of society tomorrow.”

Cultural Lag Leaves Security as the Weakest Link

5 December 2025 at 11:19

For too long, security has been cast as a bottleneck – swooping in after developers build and engineers test to slow things down. The reality is blunt: if it’s bolted on, you’ve already lost. The organizations that win make security part of every decision, from the first line of code to the last boardroom conversation...

The post Cultural Lag Leaves Security as the Weakest Link appeared first on Security Boulevard.

European Court Imposes Strict New Data Checks on Online Marketplace Ads

3 December 2025 at 00:34

Tuesday’s ruling by the Court of Justice of the European Union (CJEU) has made clear that online marketplaces are responsible for the personal data that appears in advertisements on their platforms. Platforms must get consent from any person whose data is shown in an advertisement, and must verify ads before they go live, especially where sensitive data is involved. The ruling stems from a 2018 incident in Romania. A fake advertisement on the classifieds website publi24.ro claimed a woman was offering sexual services. The post included her photos and phone number, which were used without her permission. The operator of the site, Russmedia Digital, removed the ad within an hour, but by then it had already been copied to other websites. The woman said the ad harmed her privacy and reputation and took the company to court. Lower courts in Romania gave different decisions, so the case was referred to the CJEU for clarity. The court has now confirmed that online marketplaces are data controllers under the GDPR for the personal data contained in ads on their sites.

CJEU Ruling: What Online Marketplaces Must Do Now

The court said that marketplace operators must take more responsibility and cannot rely on old rules that protect hosting services from liability. From now on, platforms must:
  • Check ads before publishing them when they contain personal or sensitive data.
  • Confirm that the person posting the ad is the same person shown in the ad, or make sure the person shown has given explicit consent.
  • Refuse ads if consent or identity verification cannot be confirmed.
  • Put measures in place to help prevent sensitive ads from being copied and reposted on other websites.
These steps must be part of the platform’s regular technical and organisational processes to comply with the GDPR.

What This Means for Platforms Across The EU

Legal teams at Pinsent Masons warned the decision “will likely have major implications for data protection across the 27 member states.” Nienke Kingma of Pinsent Masons said the ruling is important for compliance, adding it is “setting a new standard for data protection compliance across the EU.” Thijs Kelder, also at Pinsent Masons, said: “This judgment makes clear that online marketplaces cannot avoid their obligations under the GDPR,” and noted the decision “increases the operational risks on these platforms,” meaning companies will need stronger risk management. Daphne Keller of Stanford Law School warned about wider effects on free expression and platform design, noting the ruling “has major implications for free expression and access to information, age verification and privacy.”

Practical Impact

The CJEU ruling marks a major shift in how online marketplaces must operate. Platforms that allow users to post adverts will now have to rethink their processes, from verifying identities and checking personal data before an ad goes live to updating their terms and investing in new technical controls. Smaller platforms may feel the pressure most, as the cost of building these checks could be significant. What happens next will depend on how national data protection authorities interpret the ruling and how quickly companies can adapt. The coming months will reveal how verification should work in practice, what measures count as sufficient protection against reposting, and how platforms can balance these new duties with user privacy and free expression. The ruling sets a strict new standard, and its real impact will become clearer as regulators, courts, and platforms begin to implement it.

Closing the Document Security Gap: Why Document Workflows Must Be Part of Cybersecurity

2 December 2025 at 12:35

Organizations are spending more than ever on cybersecurity, layering defenses around networks, endpoints, and applications. Yet a company’s documents, among its most fundamental business assets, remain an overlooked weak spot. Documents flow across every department, cross company boundaries, and often contain the very data that compliance officers and security teams work hardest to protect...

The post Closing the Document Security Gap: Why Document Workflows Must Be Part of Cybersecurity appeared first on Security Boulevard.

Australia Establishes AI Safety Institute to Combat Emerging Threats from Frontier AI Systems

2 December 2025 at 11:38

Australia's fragmented approach to AI oversight—with responsibilities scattered across privacy commissioners, consumer watchdogs, online safety regulators, and sector-specific agencies—required coordination to keep pace with rapidly evolving AI capabilities and their potential to amplify existing harms while creating entirely new threats.

The Australian Government announced the establishment of an AI Safety Institute, backed by $29.9 million in funding, to monitor emerging AI capabilities, test advanced systems, and share intelligence across government while supporting regulators to ensure AI companies comply with Australian law. The Institute is part of the larger National AI Plan, which the government officially released on Tuesday.

The Institute will become operational in early 2026 as the centerpiece of the government's strategy to keep Australians safe while capturing economic opportunities from AI adoption. The approach maintains existing legal frameworks as the foundation for addressing AI-related risks rather than introducing standalone AI legislation, with the Institute supporting portfolio agencies and regulators to adapt laws when necessary.

Dual Focus on Upstream Risks and Downstream Harms

The AI Safety Institute will focus on both upstream AI risks and downstream AI harms. Upstream risks involve model capabilities and the ways AI systems are built and trained that can create or amplify harm, requiring technical evaluation of frontier AI systems before deployment.

Downstream harms represent real-world effects people experience when AI systems are used, including bias in hiring algorithms, privacy breaches from data processing, discriminatory outcomes in automated decision-making, and emerging threats like AI-enabled crime and AI-facilitated abuse disproportionately impacting women and girls.

The Institute will generate and share technical insights on emerging AI capabilities, working across government and with international partners. It will develop advice, support bilateral and multilateral safety engagement, and publish safety research to inform industry and academia while engaging with unions, business, and researchers to ensure functions meet community needs.

Supporting Coordinated Regulatory Response

The Institute will support coordinated responses to downstream AI harms by engaging with portfolio agencies and regulators, monitoring and analyzing information across government to allow ministers and regulators to take informed, timely, and cohesive regulatory action.

Portfolio agencies and regulators remain best placed to assess AI uses and harms in specific sectors and adjust regulatory approaches when necessary. The Institute will support existing regulators to ensure AI companies are compliant with Australian law and uphold legal standards of fairness and transparency.

The government emphasized that Australia has strong existing, largely technology-neutral legal frameworks including sector-specific guidance and standards that can apply to AI. The approach promotes flexibility, uses regulators' existing expertise, and targets emerging threats as understanding of AI's strengths and limitations evolves.

Addressing Specific AI Harms

The government is taking targeted action against specific harms while continuing to assess suitability of existing laws. Consumer protections under Australian Consumer Law apply equally to AI-enabled goods and services, with Treasury's review finding Australians enjoy the same strong protections for AI products as traditional goods.

The government addresses AI-related risks through enforceable industry codes under the Online Safety Act 2021, criminalizing non-consensual deepfake material while considering further restrictions on "nudify" apps and reforms to tackle algorithmic bias.

The Attorney-General's Department engages stakeholders through the Copyright and AI Reference Group to consult on possible updates to copyright laws as they relate to AI, with the government ruling out a text and data mining exception to provide certainty to Australian creators and media workers.

Healthcare AI regulation is under review through the Safe and Responsible AI in Healthcare Legislation and Regulation Review, while the Therapeutic Goods Administration oversees AI used in medical device software following its review on strengthening regulation of medical device software including artificial intelligence.

Also read: CPA Australia Warns: AI Adoption Accelerates Cyber Risks for Australian Businesses

National Security and Crisis Response

The Department of Home Affairs, National Intelligence Community, and law enforcement agencies continue efforts to proactively mitigate serious risks posed by AI. Home Affairs coordinates cross-government efforts on cybersecurity and critical infrastructure protection while overseeing the Protective Security Policy Framework detailing policy requirements for authorizing AI technology systems for non-corporate Commonwealth entities.

AI is likely to exacerbate existing national security risks and create new, unknown threats. The government is preparing for potential AI-related incidents through the Australian Government Crisis Management Framework, which provides overarching policy for managing potential crises.

The government will consider how AI-related harms are managed under the framework to ensure ongoing clarity regarding roles and responsibilities across government to support coordinated and effective action.

International Engagement

The Institute will collaborate with domestic and international partners including the National AI Centre and the International Network of AI Safety Institutes to support global conversations on understanding and addressing AI risks.

Australia is a signatory to the Bletchley Declaration, Seoul Declaration, and Paris Statement emphasizing inclusive international cooperation on AI governance. Participation in the UN Global Digital Compact, Hiroshima AI Process, and Global Partnership on AI supports conversations on advancing safe, secure, and trustworthy adoption.

The government is developing an Australian Government Strategy for International Engagement and Regional Leadership on Artificial Intelligence to align foreign and domestic policy settings while establishing priorities for bilateral partnerships and engagement in international forums.

Also read: UK’s AI Safety Institute Establishes San Francisco Office for Global Expansion

GPS Spoofing Detected Across Major Indian Airports; Government Tightens Security

2 December 2025 at 00:37

GPS Spoofing

The Union government of India, the country’s central federal administration, on Monday confirmed several instances of GPS spoofing near Delhi’s Indira Gandhi International Airport (IGIA) and other major airports. Officials said that despite the interference, all flights continued to operate safely and without disruption. The clarification came after reports pointed to digital interference affecting aircraft navigation systems during approach procedures at some of the busiest airports in the country.

What Is GPS Spoofing?

GPS spoofing is a form of signal interference where false Global Positioning System (GPS) signals are broadcast to mislead navigation systems. For aircraft, it can temporarily confuse onboard systems about their true location or altitude. While pilots and air traffic controllers are trained to manage such situations, repeated interference requires immediate reporting and stronger safeguards.
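To make the idea concrete, here is a minimal sketch (illustrative only, not an actual avionics algorithm) of one common spoofing symptom: a position fix that implies a physically impossible ground speed. The coordinates, speed threshold, and function names below are assumptions for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_fix_plausible(prev_fix, new_fix, max_speed_kmh=1100.0):
    """Flag a GPS fix whose implied ground speed exceeds what the aircraft
    could physically fly; sudden position jumps are a classic spoofing symptom.
    Each fix is a (latitude, longitude, unix_time_seconds) tuple."""
    dist_km = haversine_km(prev_fix[0], prev_fix[1], new_fix[0], new_fix[1])
    dt_h = (new_fix[2] - prev_fix[2]) / 3600.0
    if dt_h <= 0:
        return False  # out-of-order timestamps are themselves suspect
    return dist_km / dt_h <= max_speed_kmh
```

A fix that moves the aircraft hundreds of kilometres in one second fails the check, while a normal approach-speed update passes; real receivers cross-check many more signals (inertial data, signal strength, multiple constellations) than this toy example does.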

Government Confirms Incidents at Multiple Airports

India’s Civil Aviation Minister Ram Mohan Naidu informed Parliament that several flights approaching Delhi reported GPS spoofing while using satellite-based landing procedures on Runway 10. In a written reply to the Rajya Sabha, the minister confirmed that similar signal interference reports have been received from several of India’s major airports, including Mumbai, Kolkata, Hyderabad, Bengaluru, Amritsar, and Chennai. He explained that when GPS spoofing was detected in Delhi, contingency procedures were activated for flights approaching the affected runway. The rest of the airport continued functioning normally through conventional ground-based navigation systems, preventing any impact on overall flight operations.

Safety Procedures and New Reporting System

The Directorate General of Civil Aviation (DGCA) has issued a Standard Operating Procedure (SOP) for real-time reporting of GPS spoofing and Global Navigation Satellite System (GNSS) interference around IGI Airport. The minister added that since DGCA made reporting mandatory in November 2023, regular interference alerts have been received from major airports across the country. These reports are helping regulators identify patterns and respond more quickly to any navigation-related disturbances. India continues to maintain a network of traditional navigation and surveillance systems such as Instrument Landing Systems (ILS) and radar. These systems act as dependable backups if satellite-based navigation is interrupted, following global aviation best practices.

Airports on High Cyber Vigilance

The government said India is actively engaging with global aviation bodies to stay updated on the latest technologies, methods, and safety measures related to aviation cybersecurity. Meanwhile, the Airports Authority of India (AAI) is deploying advanced cybersecurity tools across its IT infrastructure to strengthen protection against potential digital threats. Although the cyber-related interference did not affect flight schedules, the confirmation of GPS spoofing attempts at major airports has led to increased monitoring across key aviation hubs. These airports handle millions of passengers every year, making continuous vigilance essential.

Recent Aviation Challenges

The GPS spoofing reports come shortly after a separate system failure at Delhi Airport in November, which caused major delays. That incident was later linked to a technical issue with the Automatic Message Switching System (AMSS) and was not related to cyber activity. The aviation sector also faced another challenge recently when Airbus A320 aircraft required an urgent software update; the update to the A320, a type widely used in India, led to around 388 delayed flights on Saturday. All Indian airlines completed the required updates by Sunday, allowing normal operations to resume.

Despite reports of interference, the Union government emphasised that there was no impact on passenger safety or flight operations. Established procedures, trained crews, and reliable backup systems ensured that aircraft continued operating normally. Authorities said they will continue monitoring navigation systems closely and strengthening cybersecurity measures across airports to safeguard India’s aviation network.

Cybersecurity Coalition to Government: Shutdown is Over, Get to Work

28 November 2025 at 13:37
budget open source supply chain cybersecurity ransomware White House Cyber Ops

The Cybersecurity Coalition, an industry group of almost a dozen vendors, is urging the Trump Administration and Congress, now that the government shutdown is over, to take a number of steps to strengthen the country's cybersecurity posture as China, Russia, and other foreign adversaries accelerate their attacks.

The post Cybersecurity Coalition to Government: Shutdown is Over, Get to Work appeared first on Security Boulevard.

EU Reaches Agreement on Child Sexual Abuse Detection Law After Three Years of Contentious Debate

27 November 2025 at 13:47

Child Sexual Abuse

That lengthy standoff over privacy rights versus child protection ended Wednesday, when EU member states finally agreed on a negotiating mandate for the Child Sexual Abuse Regulation, a controversial law requiring online platforms to detect, report, and remove child sexual abuse material. Critics warn the measures could enable mass surveillance of private communications.

The Council agreement, reached despite opposition from the Czech Republic, Netherlands, and Poland, clears the way for trilogue negotiations with the European Parliament to begin in 2026 on legislation that would permanently extend voluntary scanning provisions and establish a new EU Centre on Child Sexual Abuse.

The Council introduces three risk categories of online services based on objective criteria including service type, with authorities able to oblige online service providers classified in the high-risk category to contribute to developing technologies to mitigate risks relating to their services. The framework shifts responsibility to digital companies to proactively address risks on their platforms.

Permanent Extension of Voluntary Scanning

One significant provision permanently extends voluntary scanning, a temporary measure first introduced in 2021 that allows companies to voluntarily scan for child sexual abuse material without violating EU privacy laws. That exemption was set to expire in April 2026 under current e-Privacy Directive provisions.

At present, providers of messaging services may voluntarily check content shared on their platforms for online child sexual abuse material, then report and remove it. According to the Council position, this exemption will continue to apply indefinitely under the new law.

Danish Justice Minister Peter Hummelgaard welcomed the Council's agreement, stating that the spread of child sexual abuse material is "completely unacceptable." "Every year, millions of files are shared that depict the sexual abuse of children. And behind every single image and video, there is a child who has been subjected to the most horrific and terrible abuse," Hummelgaard said.

New EU Centre on Child Sexual Abuse

The legislation provides for establishment of a new EU agency, the EU Centre on Child Sexual Abuse, to support implementation of the regulation. The Centre will act as a hub for child sexual abuse material detection, reporting, and database management, receiving reports from providers, assessing risk levels across platforms, and maintaining a database of indicators.

The EU Centre will assess and process information supplied by online providers about child sexual abuse material identified on services, creating, maintaining and operating a database for reports submitted by providers. The Centre will share information from companies with Europol and national law enforcement bodies, supporting national authorities in assessing the risk that online services could be used to spread abuse material.

Online companies must provide assistance for victims who would like child sexual abuse material depicting them removed or access to such material disabled. Victims can ask for support from the EU Centre, which will check whether the companies involved have removed or disabled access to the items victims want taken down.

Privacy Concerns and Opposition

The breakthrough comes after months of stalled negotiations and a postponed October vote when Germany joined a blocking minority opposing what critics commonly call "chat control." Berlin argued the proposal risked "unwarranted monitoring of chats," comparing it to opening other people's letters.

Critics from Big Tech companies and data privacy NGOs warn the measures could pave the way for mass surveillance, as private messages would be scanned by authorities to detect illegal images. The Computer and Communications Industry Association stated that EU member states made clear the regulation can only move forward if new rules strike a true balance protecting minors while maintaining confidentiality of communications, including end-to-end encryption.

Also read: EU Chat Control Proposal to Prevent Child Sexual Abuse Slammed by Critics

Former Pirate MEP Patrick Breyer, who has been advocating against the file, characterized the Council endorsement as "a Trojan Horse" that legitimizes warrantless, error-prone mass surveillance of millions of Europeans by US corporations through cementing voluntary mass scanning.

The European Parliament's study heavily critiqued the Commission's proposal, concluding there aren't currently technological solutions that can detect child sexual abuse material without resulting in high error rates affecting all messages, files and data in platforms. The study also concluded the proposal would undermine end-to-end encryption and security of digital communications.

Scope of the Crisis

Statistics underscore the urgency: 20.5 million reports and 63 million files of abuse were submitted to the National Center for Missing and Exploited Children CyberTipline last year, with online grooming increasing 300 percent since negotiations began. Every half second, an image of a child being sexually abused is reported online.

Sixty-two percent of abuse content flagged by the Internet Watch Foundation in 2024 was traced to EU servers, with at least one in five children in Europe a victim of sexual abuse.

The Council position allows trilogue negotiations with the European Parliament and Commission to start in 2026. Those negotiations need to conclude before the already postponed expiration of the current e-Privacy regulation that allows exceptions under which companies can conduct voluntary scanning. The European Parliament reached its negotiating position in November 2023.

Account Takeover Scams Surge as FBI Reports Over $262 Million in Losses

26 November 2025 at 00:34

Account Takeover fraud

The Account Takeover fraud threat is accelerating across the United States, prompting the Federal Bureau of Investigation (FBI) to issue a new alert warning individuals, businesses, and organizations of all sizes to stay vigilant. According to the FBI Internet Crime Complaint Center (IC3), more than 5,100 complaints related to ATO fraud have been filed since January 2025, with reported losses exceeding $262 million. The bureau warns that cyber criminals are increasingly impersonating financial institutions to steal money or sensitive information.

As the annual Black Friday sale draws millions of shoppers online, the FBI notes that the surge in digital purchases creates an ideal environment for Account Takeover fraud. With consumers frequently visiting unfamiliar retail websites and acting quickly to secure limited-time deals, cyber criminals deploy fake customer support calls, phishing pages, and fraudulent ads disguised as payment or discount portals. The increased online activity during Black Friday makes it easier for attackers to blend in and harder for victims to notice red flags, making the shopping season a lucrative window for ATO scams.

How Account Takeover Fraud Works

In an ATO scheme, cyber criminals gain unauthorized access to online financial, payroll, or health savings accounts. Their goal is simple: steal funds or gather personal data that can be reused for additional fraudulent activities. The FBI notes that these attacks often start with impersonation, either of a financial institution’s staff, customer support teams, or even the institution’s official website. To carry out their schemes, criminals rely heavily on social engineering and phishing websites designed to look identical to legitimate portals. These tactics create a false sense of trust, encouraging account owners to unknowingly hand over their login credentials.

Social Engineering Tactics Increase in Frequency

The FBI highlights that most ATO cases begin with social engineering, where cyber criminals manipulate victims into sharing sensitive information such as passwords, multi-factor authentication (MFA) codes, or one-time passcodes (OTP). Common techniques include:
  • Fraudulent text messages, emails, or calls claiming unusual activity or unauthorized charges. Victims are often directed to click on phishing links or speak to fake customer support representatives.
  • Attackers posing as bank employees or technical support agents who convince victims to share login details under the guise of preventing fraudulent transactions.
  • Scenarios where cyber criminals claim the victim’s identity was used to make unlawful purchases (sometimes involving firearms), then escalate the scam by introducing another impersonator posing as law enforcement.
Once armed with stolen credentials, criminals reset account passwords and gain full control, locking legitimate users out of their own accounts.

Phishing Websites and SEO Poisoning Drive More Losses

Another growing trend is the use of sophisticated phishing domains and websites that perfectly mimic authentic financial institution portals. Victims believe they are logging into their bank or payroll system, but instead, they are handing their details directly to attackers. The FBI also warns about SEO poisoning, a method in which cyber criminals purchase search engine ads or manipulate search rankings to make fraudulent sites appear legitimate. When victims search for their bank online, these deceptive ads redirect them to phishing sites that capture their login information. Once attackers secure access, they rapidly transfer funds to criminal-controlled accounts—many linked to cryptocurrency wallets—making transactions difficult to trace or recover.
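One defensive idea implied above, comparing a visited domain against known-good ones to catch near-miss lookalikes, can be sketched in a few lines of standard-library Python. The domain names, threshold, and function names here are hypothetical, and real phishing detection uses far richer signals (certificates, registration age, reputation feeds).

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allow-list a user or organization might maintain.
LEGITIMATE_DOMAINS = {"examplebank.com", "examplepayroll.com"}

def lookalike_score(domain, known):
    """Similarity ratio in [0, 1]; near 1 means the strings almost match."""
    return SequenceMatcher(None, domain, known).ratio()

def check_url(url, threshold=0.8):
    """Classify a URL as 'trusted' (exact domain match), 'suspicious'
    (close-but-not-exact lookalike, e.g. l -> 1 swaps), or 'unknown'."""
    host = urlparse(url).hostname or ""
    host = host.removeprefix("www.")
    if host in LEGITIMATE_DOMAINS:
        return "trusted"
    for known in LEGITIMATE_DOMAINS:
        if lookalike_score(host, known) >= threshold:
            return "suspicious"
    return "unknown"
```

For example, `examp1ebank.com` scores very close to `examplebank.com` without matching exactly, which is precisely the pattern the FBI's bookmarking advice guards against.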

How to Stay Protected Against ATO Fraud

The FBI urges customers and businesses to take proactive measures to defend against ATO fraud attempts:
  • Limit personal information shared publicly, especially on social media.
  • Monitor financial accounts regularly for missing deposits, unauthorized withdrawals, or suspicious wire transfers.
  • Use unique, complex passwords and enable MFA on all accounts.
  • Bookmark financial websites and avoid clicking on search engine ads or unsolicited links.
  • Treat unexpected calls, emails, or texts claiming to be from a bank with skepticism.
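The MFA recommendation above typically means time-based one-time passwords. As a minimal sketch of how those six-digit codes are produced, here is the RFC 6238 TOTP construction using only the Python standard library; production systems should use a maintained library and protect the shared secret, and the secret shown in the usage note is the RFC's published test value, not a real credential.

```python
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Generate a time-based one-time password per RFC 6238 (HMAC-SHA1).

    secret: shared secret bytes; for_time: unix seconds (defaults to now).
    """
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                  # number of 30-second steps
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and time 59, this yields `"287082"`, matching the specification's SHA-1 test vector. Because the code depends on a secret the attacker does not hold, a stolen password alone is not enough, which is why the FBI pairs the password advice with MFA.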

What To Do If You Experience an Account Takeover

Victims of ATO fraud are advised to act quickly:
  1. Contact your financial institution immediately to request recalls or reversals, and report the incident to IC3.gov.
  2. Reset all compromised credentials, including any accounts using the same passwords.
  3. File a detailed complaint at IC3.gov with all relevant information, such as impersonated institutions, phishing links, emails, or phone numbers used.
  4. Notify the impersonated company so it can warn others and request fraudulent sites be taken down.
  5. Stay informed through updated alerts and advisories published on IC3.gov.

NSFOCUS Receives International Recognition: 2025 Global Competitive Strategy Leadership for AI-Driven Security Operation

25 November 2025 at 03:06

SANTA CLARA, Calif., Nov 25, 2025 – Recently, NSFOCUS Generative Pre-trained Transformer (NSFGPT) and Intelligent Security Operations Platform (NSFOCUS ISOP) were recognized by the internationally renowned consulting firm Frost & Sullivan and won the 2025 Global Competitive Strategy Leadership for AI-Driven Security Operation [1]. Frost & Sullivan Best Practices Recognition awards companies each year in […]

The post NSFOCUS Receives International Recognition: 2025 Global Competitive Strategy Leadership for AI-Driven Security Operation appeared first on NSFOCUS, Inc., a global network and cyber security leader, protects enterprises and carriers from advanced cyber attacks.

The post NSFOCUS Receives International Recognition: 2025 Global Competitive Strategy Leadership for AI-Driven Security Operation appeared first on Security Boulevard.

Making A Cyber Crisis Plan! Key Components Not To Be Missed

22 November 2025 at 07:42

Do you still think cyberattacks are just headlines? Their sheer frequency has turned them into a day-to-day reality, and that’s scarier. From big-name organizations to small businesses still growing, every one of them is being hit one way or another. From supply chain attacks to data breaches, the impact […]

The post Making A Cyber Crisis Plan! Key Components Not To Be Missed appeared first on Kratikal Blogs.

The post Making A Cyber Crisis Plan! Key Components Not To Be Missed appeared first on Security Boulevard.

Master how to report a breach for fast and effective cyber incident response

18 November 2025 at 04:09

For every organization, no matter its size or industry, the integrity and security of data are more crucial than ever, as each faces the possibility of a cyber breach every day. But what separates a company that bounces back quickly from one that suffers irreparable damage? The answer largely resides in how promptly and accurately the […]

The post Master how to report a breach for fast and effective cyber incident response first appeared on TrustCloud.

The post Master how to report a breach for fast and effective cyber incident response appeared first on Security Boulevard.

JWT Governance for SOC 2, ISO 27001, and GDPR — A Complete Guide

Learn how proper JWT governance helps your organization stay compliant with SOC 2, ISO 27001, and GDPR. Explore best practices, governance frameworks, and how SSOJet ensures secure token management.

The post JWT Governance for SOC 2, ISO 27001, and GDPR — A Complete Guide appeared first on Security Boulevard.

OWASP Top 10 for 2025: What’s New and Why It Matters

17 November 2025 at 00:00

In this episode, we discuss the newly released OWASP Top 10 for 2025. Join hosts Tom Eston, Scott Wright, and Kevin Johnson as they explore the changes, the continuity, and the significance of the update for application security. Learn about the importance of getting involved with the release candidate to provide feedback and suggestions. The […]

The post OWASP Top 10 for 2025: What’s New and Why It Matters appeared first on Shared Security Podcast.

The post OWASP Top 10 for 2025: What’s New and Why It Matters appeared first on Security Boulevard.


The Trojan Prompt: How GenAI is Turning Staff into Unwitting Insider Threats

14 November 2025 at 13:40
multimodal ai, AI agents, CISO, AI, Malware, DataKrypto, Tumeryk

When a wooden horse was wheeled through the gates of Troy, it was welcomed as a gift but hid a dangerous threat. Today, organizations face the modern equivalent: the Trojan prompt. It might look like a harmless request: “summarize the attached financial report and point out any potential compliance issues.” Within seconds, a generative AI..

The post The Trojan Prompt: How GenAI is Turning Staff into Unwitting Insider Threats appeared first on Security Boulevard.

US Imposes Sanctions on Burma Over Cyber Scam Operations

13 November 2025 at 02:12

US Treasury Sanctions Burma

The US Treasury has sanctioned a Burmese armed group and several related companies for their alleged involvement in cyber scam centers targeting American citizens. The Department of the Treasury’s Office of Foreign Assets Control (OFAC) announced the designations as part of a broader effort to combat organized crime, human trafficking, and cybercriminal activities operating out of Southeast Asia. According to the Treasury Department, OFAC has sanctioned the Democratic Karen Benevolent Army (DKBA), a Burmese armed group, and four of its senior leaders for supporting cyber scam centers in Burma. These operations reportedly defraud Americans through fraudulent investment schemes.

US Treasury Sanctions Burma: OFAC Targets Armed Group and Associated Firms

The agency also designated Trans Asia International Holding Group Thailand Company Limited, Troth Star Company Limited, and Thai national Chamu Sawang, citing links to Chinese organized crime networks. These entities were found to be working with the DKBA and other armed groups to establish and expand scam compounds in the region. Under Secretary of the Treasury for Terrorism and Financial Intelligence John K. Hurley stated, “criminal networks operating out of Burma are stealing billions of dollars from hardworking Americans through online scams.” He emphasized that such activities not only exploit victims financially but also contribute to Burma’s civil conflict by funding armed organizations.

Scam Center Strike Force Established

In coordination with agencies including the Federal Bureau of Investigation (FBI), U.S. Secret Service (USSS), and Department of Justice, a new Scam Center Strike Force has been launched to counter cyber scams originating from Burma, Cambodia, and Laos. This task force will focus on investigating and disrupting the most harmful Southeast Asian scam centers, while also supporting U.S. victims through education and restitution programs. The initiative aims to combine law enforcement, financial action, and diplomatic efforts to curb illicit online operations. (Image source: Department of the Treasury’s Office of Foreign Assets Control (OFAC))

An Ongoing Effort to Protect Victims

The US Treasury Sanctions Burma action builds on previous measures targeting illicit actors in the region. Earlier in 2025, the Karen National Army (KNA) and several related companies were sanctioned for their roles in human trafficking and cyber scam activities. Additional designations in Cambodia and Burma followed, targeting groups such as the Prince Group and Huione Group for operating scam compounds and laundering proceeds from virtual currency investment scams. According to government reports, Americans lost over $10 billion in 2024 to Southeast Asia-based cyber scam operations, marking a 66 percent increase from the previous year.

Cyber Scams and Human Trafficking Links

Investigations revealed that many individuals working in scam centers are victims of human trafficking, coerced into online fraud through threats and violence. Some compounds, including Tai Chang and KK Park in Burma’s Karen State, are known hubs for cyber scams. The DKBA reportedly provides protection for these compounds while also participating in violent acts against trafficked workers. These scam networks often use messaging apps and fake investment platforms to deceive Americans. Victims are manipulated into transferring funds to scam-controlled accounts under the guise of legitimate investments.

Sanctions and Legal Implications

Following today’s actions, all property and interests of the designated individuals and entities within the United States are now blocked. The sanctions prohibit any U.S. person from engaging in transactions involving these blocked parties. Violations of OFAC regulations could lead to civil or criminal penalties. The US Treasury Sanctions Burma initiative underscores the United States’ continued commitment to disrupting global cyber scam operations, holding organized crime networks accountable, and safeguarding victims of human trafficking and financial exploitation.

OpenAI Battles Court Order to Indefinitely Retain User Chat Data in NYT Copyright Dispute

12 November 2025 at 11:40

NYT, ChatGPT, The New York Times, Voice Mode, OpenAI Voice Mode

The demand started at 1.4 billion conversations.

That staggering initial request from The New York Times, later negotiated down to 20 million randomly sampled ChatGPT conversations, has thrust OpenAI into a legal fight that security experts warn could fundamentally reshape data retention practices across the AI industry. The copyright infringement lawsuit has evolved beyond intellectual property disputes into a broader battle over user privacy, data governance, and the obligations AI companies face when litigation collides with privacy commitments.

OpenAI received a court preservation order on May 13, directing the company to retain all output log data that would otherwise be deleted, regardless of user deletion requests or privacy regulation requirements. District Judge Sidney Stein affirmed the order on June 26 after OpenAI appealed, rejecting arguments that user privacy interests should override preservation needs identified in the litigation.

Privacy Commitments Clash With Legal Obligations

The preservation order forces OpenAI to maintain consumer ChatGPT and API user data indefinitely, directly conflicting with the company's standard 30-day deletion policy for conversations users choose not to save. This requirement encompasses data from December 2022 through November 2024, affecting ChatGPT Free, Plus, Pro, and Team subscribers, along with API customers without Zero Data Retention agreements.

ChatGPT Enterprise, ChatGPT Edu, and business customers with Zero Data Retention contracts remain excluded from the preservation requirements. The order does not change OpenAI's policy of not training models on business data by default.

OpenAI implemented restricted access protocols, limiting preserved data to a small, audited legal and security team. The company maintains this information remains locked down and cannot be used beyond meeting legal obligations. No data will be turned over to The New York Times, the court, or external parties at this time.

Also read: OpenAI Announces Safety and Security Committee Amid New AI Model Development

Copyright Case Drives Data Preservation Demands

The New York Times filed its copyright infringement lawsuit in December 2023, alleging OpenAI illegally used millions of news articles to train large language models including ChatGPT and GPT-4. The lawsuit claims this unauthorized use constitutes copyright infringement and unfair competition, arguing OpenAI profits from intellectual property without permission or compensation.

The Times seeks more than monetary damages. The lawsuit demands destruction of all GPT models and training sets using its copyrighted works, with potential damages reaching billions of dollars in statutory and actual damages.

The newspaper's legal team argued their preservation request warranted approval partly because another AI company previously agreed to hand over 5 million private user chats in an unrelated case. OpenAI rejected this precedent as irrelevant to its situation.

Technical and Regulatory Complications

Complying with indefinite retention requirements presents significant engineering challenges. OpenAI must build systems capable of storing hundreds of millions of conversations from users worldwide, requiring months of development work and substantial financial investment.

The preservation order creates conflicts with international data protection regulations including GDPR. While OpenAI's terms of service allow data preservation for legal requirements—a point Judge Stein emphasized—the company argues The Times's demands exceed reasonable discovery scope and abandon established privacy norms.

OpenAI proposed several privacy-preserving alternatives, including targeted searches over preserved samples to identify conversations potentially containing New York Times article text. These suggestions aimed to provide only data relevant to copyright claims while minimizing broader privacy exposure.

Recent court modifications provided limited relief. As of September 26, 2025, OpenAI no longer must preserve all new chat logs going forward. However, the company must retain all data already saved under the previous order and maintain information from ChatGPT accounts flagged by The New York Times, with the newspaper authorized to expand its flagged user list while reviewing preserved records.

"Our long-term roadmap includes advanced security features designed to keep your data private, including client-side encryption for your messages with ChatGPT. We will build fully automated systems to detect safety issues in our products. Only serious misuse and critical risks—such as threats to someone’s life, plans to harm others, or cybersecurity threats—may ever be escalated to a small, highly vetted team of human reviewers." - Dane Stuckey, Chief Information Security Officer, OpenAI 

Implications for AI Governance

The case transforms abstract AI privacy concerns into immediate operational challenges affecting 400 million ChatGPT users. Security practitioners note the preservation order shatters fundamental assumptions about data deletion in AI interactions.

OpenAI CEO Sam Altman characterized the situation as accelerating needs for "AI privilege" concepts, suggesting conversations with AI systems should receive protections similar to attorney-client privilege. The company frames unlimited data preservation as setting dangerous precedents for AI communication privacy.

The litigation presents concerning scenarios for enterprise users integrating ChatGPT into applications handling sensitive information. Organizations using OpenAI's technology for healthcare, legal, or financial services must reassess compliance with regulations including HIPAA and GDPR given indefinite retention requirements.

Legal analysts warn this case likely invites third-party discovery attempts, with litigants in unrelated cases seeking access to adversaries' preserved AI conversation logs. Such developments would further complicate data privacy issues and potentially implicate attorney-client privilege protections.

The outcome will significantly impact how AI companies access and utilize training data, potentially reshaping development and deployment of future AI technologies. Central questions remain unresolved regarding fair use doctrine application to AI model training and the boundaries of discovery in AI copyright litigation.

Also read: OpenAI’s SearchGPT: A Game Changer or Pandora’s Box for Cybersecurity Pros?

UK Tightens Cyber Laws as Attacks Threaten Hospitals, Energy, and Transport

12 November 2025 at 00:44

Cyber Security and Resilience Bill

The UK government has unveiled the Cyber Security and Resilience Bill, a landmark move to strengthen UK cyber defences across essential public services, including healthcare, transport, water, and energy. The legislation aims to shield the nation’s critical national infrastructure from increasingly complex cyberattacks, which have cost the UK economy nearly £15 billion annually. According to the latest Cyble report — “Europe’s Threat Landscape: What 2025 Exposed and Why 2026 Could Be Worse”, Europe witnessed over 2,700 cyber incidents in 2025 across sectors such as BFSI, Government, Retail, and Energy. The report highlights how ransomware groups and politically motivated hacktivists have reshaped the regional threat landscape, emphasizing the urgency of unified cyber resilience strategies.

Cyber Security and Resilience Bill to Protect Critical National Infrastructure

At the heart of the new Cyber Security and Resilience Bill is the protection of vital services that people rely on daily. The legislation will ensure hospitals, water suppliers, and transport operators are equipped with stronger cyber resilience capabilities to prevent service disruptions and mitigate risks from future attacks.

The Bill will, for the first time, regulate medium and large managed service providers offering IT, cybersecurity, and digital support to organisations like the NHS. These providers will be required to report significant incidents promptly and maintain contingency plans for rapid recovery. Regulators will also gain authority to designate critical suppliers — such as diagnostic service providers or energy suppliers — and enforce minimum security standards to close supply chain gaps that cybercriminals could exploit.

To strengthen compliance, enforcement will be modernised with turnover-based penalties for serious violations, ensuring cybersecurity remains a non-negotiable priority. The Technology Secretary will also have powers to direct organisations, including NHS Trusts and utilities, to take urgent actions to mitigate threats to national security.

UK Cyber Defences Face Mounting Pressure Amid Rising Attacks

Recent data shows the average cost of a significant cyberattack in the UK now exceeds £190,000, amounting to nearly £14.7 billion in total annual losses. The Office for Budget Responsibility (OBR) warns that a large-scale attack on critical national infrastructure could push borrowing up by £30 billion, equivalent to 1.1% of GDP. These findings align closely with Cyble’s Europe’s Threat Landscape report, which observed the rise of new ransomware groups like Qilin and Akira and a surge in pro-Russian hacktivism targeting European institutions through DDoS and defacement campaigns. The report also revealed that the retail sector accounted for 41% of all compromised access sales, demonstrating the widespread impact of evolving cybercrime tactics. Both the government and industry experts agree that defending against these threats requires a unified approach. National Cyber Security Centre (NCSC) CEO Dr. Richard Horne emphasised that “the real-world impacts of cyberattacks have never been more evident,” calling the Bill “a crucial step in protecting our most critical services.”

Building a Secure and Resilient Future

The Cyber Security and Resilience Bill represents a major shift in how the UK safeguards its people, economy, and digital ecosystem. By tightening cyber regulations for essential and digital services, the government seeks to reduce vulnerabilities and strengthen the UK's cyber resilience posture for the years ahead.

Industry leaders have welcomed the legislation. Darktrace CEO Jill Popelka praised the government's initiative to modernise cyber laws in an era where attackers are leveraging AI-driven tools. Cisco UK CEO Sarah Walker also noted that only 8% of UK organisations are currently "mature" in their cybersecurity readiness, highlighting the importance of continuous improvement.

Meanwhile, the Cyble report on Europe's Threat Landscape warns that as state-backed operations merge with financially motivated attacks, 2026 could bring even more volatility. Cyble Research and Intelligence Labs recommends that organisations adopt intelligence-led defence strategies and proactive threat monitoring to stay ahead of emerging adversaries.

The Road Ahead

Both the Cyber Security and Resilience Bill and Cyble’s Europe’s Threat Landscape findings serve as a wake-up call: the UK and Europe are facing a new era of persistent cyber risks. Strengthening collaboration between government, regulators, and private industry will be key to securing critical systems and ensuring operational continuity. Organizations can explore deeper insights and practical recommendations from Cyble’s Europe’s Threat Landscape: What 2025 Exposed — and Why 2026 Could Be Worse report here, which provides detailed sectoral analysis and strategies to build a stronger, more resilient future against cyber threats.

Global GRC Platform Market Set to Reach USD 127.7 Billion by 2033

12 November 2025 at 00:36


The GRC platform market is witnessing strong growth as organizations across the globe focus on strengthening governance, mitigating risks, and meeting evolving compliance demands. According to recent estimates, the market was valued at USD 49.2 billion in 2024 and is projected to reach USD 127.7 billion by 2033, growing at a CAGR of 11.18% between 2025 and 2033.

This GRC platform market growth reflects the increasing need to protect sensitive data, manage cyber risks, and streamline regulatory compliance processes.

Rising Need for Governance, Risk, and Compliance Solutions

As cyberthreats continue to rise, enterprises are turning to GRC platforms to gain centralized visibility into their risk posture. These solutions help organizations identify, assess, and respond to potential risks, ensuring stronger governance and reduced operational disruption.

The market’s momentum is also fueled by heightened regulatory scrutiny and the introduction of new compliance frameworks worldwide. Businesses are under pressure to maintain transparency, accuracy, and accountability in their governance and reporting processes — areas where a GRC platform adds significant value.

By integrating governance, risk, and compliance management into one system, companies can make informed decisions, reduce human error, and ensure consistent adherence to evolving regulations.

GRC Platform Market Insights and Key Segments

The GRC platform market is segmented based on deployment model, solution, component, end-user, and industry vertical.

  • Deployment Model: The on-premises deployment model dominates the market due to enhanced security and customization options. It is preferred by organizations handling sensitive data or operating under strict regulatory environments.

  • Solution Type: Compliance management holds the largest market share as businesses prioritize automation of documentation, tracking, and reporting to stay audit-ready.

  • Component: Software solutions lead the market by offering analytics, policy management, and workflow automation to streamline risk processes.

  • End User: Medium enterprises represent the largest segment, focusing on scalable solutions that balance security and efficiency.

  • Industry Vertical: The BFSI sector remains a key adopter due to its complex regulatory landscape and high data security requirements.

Key Drivers of the GRC Platform Market

Several factors contribute to the rapid expansion of the GRC platform market:

  1. Escalating Cyber Risks: As cyber incidents become more frequent and sophisticated, organizations seek to integrate cybersecurity measures within GRC frameworks. These integrations improve detection, response, and recovery capabilities.

  2. Evolving Compliance Standards: Increasing regulatory pressure drives adoption of GRC solutions to ensure businesses stay aligned with global standards like GDPR, HIPAA, and ISO 27001.

  3. Automation and Efficiency: Advanced GRC software reduces manual reporting and enhances accuracy, enabling faster audit responses and improved decision-making.

  4. Operational Resilience: A robust GRC system ensures business continuity by minimizing vulnerabilities and improving crisis management strategies.
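To make the automation driver above concrete, the core of what a GRC platform does can be sketched as a baseline-versus-observed comparison. This is an illustrative sketch only: the control names (`mfa_enabled`, `min_password_length`, `audit_logging`) and baseline values are hypothetical and not drawn from any particular product or compliance framework.

```python
# Illustrative GRC-style check: compare observed system settings against a
# baseline of required controls and report any gaps as audit findings.

BASELINE = {
    "mfa_enabled": True,          # multifactor authentication required
    "min_password_length": 12,    # minimum password length policy
    "audit_logging": True,        # audit trails retained for review
}

def audit(observed: dict) -> list[str]:
    """Return findings for every control that misses the baseline."""
    findings = []
    for control, required in BASELINE.items():
        actual = observed.get(control)
        if isinstance(required, bool):
            # Boolean controls must be explicitly enabled.
            if required and actual is not True:
                findings.append(f"{control}: expected enabled, got {actual!r}")
        elif isinstance(required, int):
            # Numeric controls must meet or exceed the baseline threshold.
            if not isinstance(actual, int) or actual < required:
                findings.append(f"{control}: expected >= {required}, got {actual!r}")
    return findings

for finding in audit({"mfa_enabled": True, "min_password_length": 8}):
    print(finding)
```

In a real platform, the baseline would be mapped to framework controls (for example, ISO 27001 clauses or HIPAA safeguards) and the observed values pulled automatically from live systems, which is where the reduction in manual reporting comes from.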

Regional Outlook and Future Trends

North America currently leads the GRC platform market, supported by mature digital infrastructure and strong regulatory frameworks. Meanwhile, the Asia-Pacific region is emerging as a key growth area, driven by increased cloud adoption and a rising focus on data privacy.

In the coming years, integration with AI, analytics, and threat intelligence tools will transform how organizations approach governance and risk. The market is expected to evolve toward more predictive and adaptive compliance solutions.

Leveraging Threat Intelligence for Stronger Risk Governance

As organizations expand their digital ecosystems, threat intelligence has become a vital part of effective risk management. Platforms like Cyble help enterprises identify, monitor, and mitigate emerging cyber risks before they escalate. Integrating such intelligence-driven insights into a GRC platform strengthens visibility and helps build a proactive security posture.

For security leaders aiming to align governance with real-time intelligence, exploring a quick free demo of integrated risk and compliance tools can offer valuable perspective on enhancing organizational resilience.

Why API Security Will Drive AppSec in 2026 and Beyond 

6 November 2025 at 01:42

As LLMs, agents and Model Context Protocols (MCPs) reshape software architecture, API sprawl is creating major security blind spots. The 2025 GenAI Application Security Report reveals why continuous API discovery, testing and governance are now critical to protecting AI-driven applications from emerging semantic and prompt-based attacks.

The post Why API Security Will Drive AppSec in 2026 and Beyond  appeared first on Security Boulevard.

Using FinOps to Detect AI-Created Security Risks 

6 November 2025 at 01:28

As AI investments surge toward $1 trillion by 2027, many organizations still see zero ROI due to hidden security and cost risks. Discover how aligning FinOps with security practices helps identify AI-related vulnerabilities, control cloud costs, and build sustainable, secure AI operations.

The post Using FinOps to Detect AI-Created Security Risks  appeared first on Security Boulevard.

NSE System Audit – What is it and Who Needs It?

4 November 2025 at 01:50

System Audit is a mandatory technical and compliance assessment introduced by SEBI and implemented by the National Stock Exchange (NSE). Its primary purpose is to ensure that every trading member or broker operates secure, reliable, and compliant IT systems capable of safeguarding investors and market operations. Note that this audit isn’t a superficial formality. It […]

The post NSE System Audit – What is it and Who Needs It? appeared first on Kratikal Blogs.

The post NSE System Audit – What is it and Who Needs It? appeared first on Security Boulevard.

FCC Set to Reverse Course on Telecom Cybersecurity Mandate

31 October 2025 at 07:36


The Federal Communications Commission will vote next month to rescind a controversial January 2025 Declaratory Ruling that attempted to impose sweeping cybersecurity requirements on telecommunications carriers by reinterpreting a 1994 wiretapping law.

In an Order on Reconsideration circulated Thursday, the FCC concluded that the previous interpretation was both legally erroneous and ineffective at promoting cybersecurity.

The reversal marks a dramatic shift in the FCC's approach to telecommunications security, moving away from mandated requirements toward voluntary industry collaboration—particularly in response to the massive Salt Typhoon espionage campaign sponsored by China that compromised at least eight U.S. communications companies in 2024.

CALEA Reinterpretation

On January 16, 2025—just five days before a change in administration—the FCC adopted a Declaratory Ruling claiming that section 105 of the Communications Assistance for Law Enforcement Act (CALEA) "affirmatively requires telecommunications carriers to secure their networks from unlawful access to or interception of communications."

CALEA, enacted in 1994, was designed to preserve law enforcement's ability to conduct authorized electronic surveillance as telecommunications technology evolved. Section 105 specifically requires that interception of communications within a carrier's "switching premises" can only be activated with a court order and with intervention by a carrier employee.

The January ruling took this narrow provision focused on lawful wiretapping and expanded it dramatically, interpreting it as requiring carriers to prevent all unauthorized interceptions across their entire networks. The Commission stated that carriers would be "unlikely" to satisfy these obligations without adopting basic cybersecurity practices including role-based access controls, changing default passwords, requiring minimum password strength, and adopting multifactor authentication.

The ruling emphasized that "enterprise-level implementation of these basic cybersecurity hygiene practices is necessary" because vulnerabilities in any part of a network could provide attackers unauthorized access to surveillance systems. It concluded that carriers could be in breach of statutory obligations if they failed to adopt certain cybersecurity practices—even without formal rules adopted by the Commission.
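As a rough illustration of two of the hygiene practices the ruling names — rejecting default passwords and enforcing minimum password strength — a carrier-side check might look something like the following. The default-password list and thresholds here are hypothetical, not taken from the ruling or any carrier's actual policy.

```python
# Illustrative sketch of basic password hygiene checks: reject known default
# passwords and enforce a minimum length plus minimal character variety.
import re

# Hypothetical list; real deployments use far larger breached-password sets.
DEFAULT_PASSWORDS = {"admin", "password", "changeme", "12345678"}

def password_acceptable(pw: str, min_length: int = 12) -> bool:
    """Return True only if pw is not a default and meets strength rules."""
    if pw.lower() in DEFAULT_PASSWORDS:
        return False
    if len(pw) < min_length:
        return False
    # Require at least one letter and one digit as a minimal variety check.
    return bool(re.search(r"[A-Za-z]", pw)) and bool(re.search(r"\d", pw))

print(password_acceptable("changeme"))           # a known default password
print(password_acceptable("Tr41n-Yard-Lamp-9"))  # long, mixed characters
```

Checks like this address only one of the practices the ruling cited; the others (role-based access controls and multifactor authentication) operate at the account and session layer rather than on the credential itself.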

Industry Pushback and Legal Questions

CTIA – The Wireless Association, NCTA – The Internet & Television Association, and USTelecom – The Broadband Association filed a petition for reconsideration on February 18, arguing that the ruling exceeded the FCC's statutory authority and misinterpreted CALEA.

The new FCC agreed with these concerns, finding three fundamental legal flaws in the January ruling:

Enforcement Authority: The Commission concluded it lacks authority to enforce its interpretation of CALEA without first adopting implementing rules through notice-and-comment rulemaking. CALEA section 108 commits enforcement authority to the courts, not the FCC. The Commission noted that when it previously wanted to enforce CALEA requirements, it codified them as rules in 2006 specifically to gain enforcement authority.

"Switching Premises" Limitation: Section 105 explicitly refers to interceptions "effected within its switching premises," but the ruling appeared to impose obligations across carriers' entire networks. The Commission found this expansion ignored clear statutory limits.

"Interception" Definition: CALEA incorporates the Wiretap Act's definition of "intercept," which courts have consistently interpreted as limited to communications intercepted contemporaneously with transmission—not stored data. The ruling's required practices target both data in transit and at rest, exceeding section 105's scope.

"It was unlawful because the FCC purported to read a statute that required telecommunications carriers to allow lawful wiretaps within a certain portion of their network as a provision that required carriers to adopt specific network management practices in every portion of their network," the new order states.

The Voluntary Approach of Provider Commitments

Rather than mandated requirements, the FCC pointed to voluntary commitments from communications providers following collaborative engagement throughout 2025. In an October 16 ex parte filing, industry associations detailed "extensive, urgent, and coordinated efforts to mitigate operational risks, protect consumers, and preserve national security interests."

These voluntary measures include:

  • Accelerated patching cycles for outdated or vulnerable equipment
  • Updated and reviewed access controls
  • Disabled unnecessary outbound connections to limit lateral network movement
  • Improved threat-hunting efforts
  • Increased cybersecurity information sharing with federal government and within the communications sector
  • Establishment of the Communications Cybersecurity Information Sharing and Analysis Center (C2 ISAC) for real-time threat intelligence sharing
  • New collaboration forum for Chief Information Security Officers from U.S. and Canadian providers

"The government-industry partnership model of collaboration has enabled communications providers to respond swiftly and agilely to Salt Typhoon, reduce vulnerabilities exposed by the attack, and bolster network cyber defenses," the industry associations stated.

Salt Typhoon Context

The Salt Typhoon attacks, disclosed in September 2024, involved a PRC-sponsored advanced persistent threat group infiltrating U.S. communications companies as part of a massive espionage campaign affecting dozens of countries. Critically, the attacks exploited publicly known common vulnerabilities and exposures (CVEs) rather than zero-day vulnerabilities—meaning they targeted avoidable weaknesses rather than previously unknown flaws.

The FCC noted that following its engagement with carriers after Salt Typhoon, providers agreed to implement additional cybersecurity controls representing "a significant change in cybersecurity practices compared to the measures in place in January."

Also read: Salt Typhoon Cyberattack: FBI Investigates PRC-linked Breach of US Telecoms

Targeted Regulatory Actions Continue

While rescinding the broad CALEA interpretation, the FCC emphasized it continues pursuing targeted cybersecurity regulations in specific areas where it has clear legal authority:

  • Rules requiring submarine cable licensees to create and implement cybersecurity risk management plans
  • Rules ensuring test labs and certification bodies in the equipment authorization program aren't controlled by foreign adversaries
  • Investigations of Chinese Communist Party-aligned businesses whose equipment appears on the FCC's Covered List
  • Proceedings to revoke authorizations for entities like HKT (International) Limited over national security concerns

"The Commission is leveraging the full range of the Commission's regulatory, investigatory, and enforcement authorities to protect Americans and American companies from foreign adversaries," the order states, while maintaining that collaboration with carriers coupled with targeted, legally robust regulatory and enforcement measures, has proven successful.

The FCC is also set to withdraw the Notice of Proposed Rulemaking that accompanied the January Declaratory Ruling, which would have proposed specific cybersecurity requirements for a broad array of service providers. The NPRM was never published in the Federal Register, so the public comment period never commenced.

The Commission's new approach reflects a bet that voluntary industry cooperation, supported by targeted regulations in specific high-risk areas, will likely prove more effective than sweeping mandates of questionable legal foundation.

The Cyber Insurance Crunch: Turning Rising Premiums Into Security Wins 

27 October 2025 at 06:23

Cyber insurance is no longer just a safety net; it’s a catalyst for change. With premiums climbing and coverage shrinking, insurers are forcing organizations to modernize security operations, embrace AI-driven risk quantification, and tighten governance. Here’s how forward-looking leaders are turning insurance pain into long-term resilience. 

The post The Cyber Insurance Crunch: Turning Rising Premiums Into Security Wins  appeared first on Security Boulevard.

❌