AI Security Is Top Cyber Concern: World Economic Forum

14 January 2026 at 15:43

AI is expected to be “the most significant driver of change in cybersecurity” this year, according to the World Economic Forum’s annual cybersecurity outlook. That was the view of 94% of the more than 800 cybersecurity leaders surveyed by the organization for its Global Cybersecurity Outlook 2026 report, published this week.

The report, a collaboration with Accenture, also looked at other cybersecurity concerns such as geopolitical risk and preparedness, but AI security issues are what’s most on the minds of CEOs, CISOs and other top security leaders, according to the report.

One interesting data point in the report is a divergence between CEOs and CISOs. Cyber-enabled fraud is now the top concern of CEOs, who have moved their focus from ransomware to “emerging risks such as cyber-enabled fraud and AI vulnerabilities.” CISOs, on the other hand, are more concerned about ransomware and supply chain resilience, more in line with the forum’s 2025 report. “This reflects how cybersecurity priorities diverge between the boardroom and the front line,” the report said.

Top AI Security Concerns

C-level leaders are also concerned about AI-related vulnerabilities, which were identified as the fastest-growing cyber risk by 87% of respondents (chart below). Cyber-enabled fraud and phishing, supply chain disruption, exploitation of software vulnerabilities and ransomware attacks were also cited as growing risks by more than half of survey respondents, while insider threats and denial of service (DoS) attacks were seen as growing concerns by about 30% of respondents.

[Chart: Growing cybersecurity risks (World Economic Forum)]

The top generative AI (GenAI) concerns include data leaks exposing personal data, advancement of adversarial capabilities (phishing, malware development and deepfakes, for example), the technical security of the AI systems themselves, and increasingly complex security governance (chart below).

[Chart: GenAI security concerns]

Concern About AI Security Leads to Action

The increasing focus on AI security is leading to action within organizations, as the percentage of respondents assessing the security of AI tools grew from 37% in 2025 to 64% in 2026. That is helping to close “a significant gap between the widespread recognition of AI-driven risks and the rapid adoption of AI technologies without adequate safeguards,” the report said, as more organizations introduce structured processes and governance models to manage AI more securely.

About 40% of organizations conduct periodic reviews of their AI tools before deploying them, while 24% do a one-time assessment, and 36% report no assessment or no knowledge of one. The report called that “a clear sign of progress towards continuous assurance,” but noted that “roughly one-third still lack any process to validate AI security before deployment, leaving systemic exposures even as the race to adopt AI in cyber defences accelerates.”

The forum report recommended protecting data used in the training and customization of AI models from breaches and unauthorized access, developing AI systems with security as a core principle, incorporating regular updates and patches, and deploying “robust authentication and encryption protocols to ensure the protection of customer interactions and data.”

AI Adoption in Security Operations

The report noted the impact of AI on defensive cybersecurity tools and operations. “AI is fundamentally transforming security operations – accelerating detection, triage and response while automating labour-intensive tasks such as log analysis and compliance reporting,” the report said. “AI’s ability to process vast datasets and identify patterns at speed positions it as a competitive advantage for organizations seeking to stay ahead of increasingly sophisticated cyberthreats.”

The survey found that 77% of organizations have adopted AI for cybersecurity purposes, primarily to enhance phishing detection (52%), intrusion and anomaly response (46%), and user-behavior analytics (40%).

Still, the report cited a need for greater knowledge and skills in deploying AI for cybersecurity, the need for human oversight, and uncertainty about risk as the biggest obstacles to AI adoption in cybersecurity. “These findings indicate that trust is still a barrier to widespread AI adoption,” the report said.

Human oversight remains an important part of security operations even among organizations that have incorporated AI into their processes. “While AI excels at automating repetitive, high-volume tasks, its current limitations in contextual judgement and strategic decision-making remain clear,” the report said. “Over-reliance on ungoverned automation risks creating blind spots that adversaries may exploit.”

Adoption of AI cybersecurity tools also varies by industry, the report found. The energy sector prioritizes intrusion and anomaly detection (cited by 69% of respondents in that sector who have implemented AI for cybersecurity), the materials and infrastructure sector emphasizes phishing protection (80%), and the manufacturing, supply chain and transportation sector focuses on automated security operations (59%).
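The report stays at the level of survey figures, but a minimal sketch can make “user-behavior analytics” concrete. The Python below is purely illustrative and not drawn from the WEF report; the users, counts and threshold are hypothetical. It flags an account whose activity in the current window deviates sharply from that account’s own historical baseline, which is the basic idea behind behavioral anomaly detection.

# Illustrative sketch only (not from the WEF report): a toy user-behavior
# analytics check that flags accounts whose login volume deviates sharply
# from their own historical baseline.
from statistics import mean, stdev

# Hypothetical hourly login counts per user over a baseline week.
baseline = {
    "alice": [3, 2, 4, 3, 2, 3, 4],
    "bob":   [1, 1, 0, 2, 1, 1, 1],
}

# Hypothetical counts observed in the current hour.
current = {"alice": 4, "bob": 19}

def is_anomalous(history, observed, threshold=3.0):
    # Flag the observation if it sits more than `threshold` standard
    # deviations above the user's own historical mean (a simple z-score rule).
    mu = mean(history)
    sigma = stdev(history) or 1.0  # guard against a flat (zero-variance) baseline
    return (observed - mu) / sigma > threshold

for user, count in current.items():
    if is_anomalous(baseline[user], count):
        print(f"ALERT: unusual login volume for {user}: {count} logins this hour")

Production tools use far richer signals and models, but the underlying pattern of baselining normal behavior per user and alerting on large deviations is the same.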

Geopolitical Cyber Threats

Geopolitics was the top factor influencing overall cyber risk mitigation strategies, with 64% of organizations accounting for geopolitically motivated cyberattacks such as disruption of critical infrastructure or espionage. The report also noted that “confidence in national cyber preparedness continues to erode” in the face of geopolitical threats, with 31% of survey respondents “reporting low confidence in their nation’s ability to respond to major cyber incidents,” up from 26% in the 2025 report.

Respondents from the Middle East and North Africa expressed confidence in their country’s ability to protect critical infrastructure (84%), while confidence is lower among respondents in Latin America and the Caribbean (13%). “Recent incidents affecting key infrastructure, such as airports and hydroelectric facilities, continue to call attention to these concerns,” the report said.

“Despite its central role in safeguarding critical infrastructure, the public sector reports markedly lower confidence in national preparedness,” the report added, with 23% of public-sector organizations saying they lack sufficient cyber-resilience capabilities.

Australia Establishes AI Safety Institute to Combat Emerging Threats from Frontier AI Systems

2 December 2025 at 11:38

Australia's fragmented approach to AI oversight—with responsibilities scattered across privacy commissioners, consumer watchdogs, online safety regulators, and sector-specific agencies—required coordination to keep pace with rapidly evolving AI capabilities and their potential to amplify existing harms while creating entirely new threats.

The Australian Government announced the establishment of the AI Safety Institute, backed by $29.9 million in funding, to monitor emerging AI capabilities, test advanced systems, and share intelligence across government while supporting regulators to ensure AI companies comply with Australian law. The Institute is part of the broader National AI Plan, which the Australian government officially released on Tuesday.

The Institute will become operational in early 2026 as the centerpiece of the government's strategy to keep Australians safe while capturing economic opportunities from AI adoption. The approach maintains existing legal frameworks as the foundation for addressing AI-related risks rather than introducing standalone AI legislation, with the Institute supporting portfolio agencies and regulators to adapt laws when necessary.

Dual Focus on Upstream Risks and Downstream Harms

The AI Safety Institute will focus on both upstream AI risks and downstream AI harms. Upstream risks involve model capabilities and the ways AI systems are built and trained that can create or amplify harm, requiring technical evaluation of frontier AI systems before deployment.

Downstream harms represent real-world effects people experience when AI systems are used, including bias in hiring algorithms, privacy breaches from data processing, discriminatory outcomes in automated decision-making, and emerging threats like AI-enabled crime and AI-facilitated abuse disproportionately impacting women and girls.

The Institute will generate and share technical insights on emerging AI capabilities, working across government and with international partners. It will develop advice, support bilateral and multilateral safety engagement, and publish safety research to inform industry and academia while engaging with unions, business, and researchers to ensure its functions meet community needs.

Supporting Coordinated Regulatory Response

The Institute will support coordinated responses to downstream AI harms by engaging with portfolio agencies and regulators and by monitoring and analyzing information across government, allowing ministers and regulators to take informed, timely, and cohesive regulatory action.

Portfolio agencies and regulators remain best placed to assess AI uses and harms in specific sectors and adjust regulatory approaches when necessary. The Institute will support existing regulators to ensure AI companies are compliant with Australian law and uphold legal standards of fairness and transparency.

The government emphasized that Australia has strong existing, largely technology-neutral legal frameworks including sector-specific guidance and standards that can apply to AI. The approach promotes flexibility, uses regulators' existing expertise, and targets emerging threats as understanding of AI's strengths and limitations evolves.

Addressing Specific AI Harms

The government is taking targeted action against specific harms while continuing to assess the suitability of existing laws. Consumer protections under Australian Consumer Law apply equally to AI-enabled goods and services, with Treasury's review finding that Australians enjoy the same strong protections for AI products as for traditional goods.

The government is also addressing AI-related risks through enforceable industry codes under the Online Safety Act 2021 and by criminalizing non-consensual deepfake material, while considering further restrictions on "nudify" apps and reforms to tackle algorithmic bias.

The Attorney-General's Department engages stakeholders through the Copyright and AI Reference Group to consult on possible updates to copyright laws as they relate to AI, with the government ruling out a text and data mining exception to provide certainty to Australian creators and media workers.

Healthcare AI regulation is under review through the Safe and Responsible AI in Healthcare Legislation and Regulation Review, while the Therapeutic Goods Administration oversees AI used in medical device software following its review on strengthening regulation of medical device software including artificial intelligence.

Also read: CPA Australia Warns: AI Adoption Accelerates Cyber Risks for Australian Businesses

National Security and Crisis Response

The Department of Home Affairs, National Intelligence Community, and law enforcement agencies continue efforts to proactively mitigate serious risks posed by AI. Home Affairs coordinates cross-government efforts on cybersecurity and critical infrastructure protection while overseeing the Protective Security Policy Framework, which details policy requirements for authorizing AI technology systems for non-corporate Commonwealth entities.

AI is likely to exacerbate existing national security risks and create new, unknown threats. The government is preparing for potential AI-related incidents through the Australian Government Crisis Management Framework, which provides overarching policy for managing potential crises.

The government will consider how AI-related harms are managed under the framework to ensure ongoing clarity about roles and responsibilities across government and to support coordinated and effective action.

International Engagement

The Institute will collaborate with domestic and international partners including the National AI Centre and the International Network of AI Safety Institutes to support global conversations on understanding and addressing AI risks.

Australia is a signatory to the Bletchley Declaration, Seoul Declaration, and Paris Statement emphasizing inclusive international cooperation on AI governance. Participation in the UN Global Digital Compact, Hiroshima AI Process, and Global Partnership on AI supports conversations on advancing safe, secure, and trustworthy adoption.

The government is developing an Australian Government Strategy for International Engagement and Regional Leadership on Artificial Intelligence to align foreign and domestic policy settings while establishing priorities for bilateral partnerships and engagement in international forums.

Also read: UK’s AI Safety Institute Establishes San Francisco Office for Global Expansion