Veza has added a platform to its portfolio that is specifically designed to secure and govern artificial intelligence (AI) agents that might soon be strewn across the enterprise. Veza, which is currently in the process of being acquired by ServiceNow, based the platform on an Access Graph it previously developed to provide cybersecurity teams with a…
How Do Non-Human Identities Transform Cloud Security Management? Could your cloud security management strategy be missing a vital component? As cybersecurity evolves, the focus has expanded beyond traditional human identities to encompass Non-Human Identities (NHIs). Understanding NHIs and their role in modern cloud environments is crucial for industries ranging from financial services to healthcare. This […]
Is Your Organization Prepared for the Evolving Landscape of Non-Human Identities? Managing non-human identities (NHIs) has become a critical focal point for organizations, especially for those using cloud-based platforms. But how can businesses ensure they are adequately protected against the evolving threats targeting machine identities? The answer lies in adopting a strategic and comprehensive approach […]
Are You Overlooking the Security of Non-Human Identities in Your Cybersecurity Framework? In a world bustling with technological advancements, the security focus often zooms in on human authentication and protection, leaving the non-human counterparts, Non-Human Identities (NHIs), in the shadows. The integration of NHIs in data security strategies is not just an added layer of protection but a necessity. […]
Why Are Non-Human Identities Essential for Secure Cloud Environments? Organizations face a unique but critical challenge: securing non-human identities (NHIs) and their secrets within cloud environments. But why are NHIs increasingly pivotal for cloud security strategies? Understanding Non-Human Identities and Their Role in Cloud Security To comprehend the significance of NHIs, we must first explore […]
What Role Do Non-Human Identities Play in Securing Our Digital Ecosystems? As more organizations migrate to the cloud, the concept of securing Non-Human Identities (NHIs) is becoming increasingly crucial. NHIs, essentially machine identities, are pivotal in maintaining robust cybersecurity frameworks. They are a unique combination of encrypted passwords, tokens, or keys, which are akin to […]
What Makes AI-Driven Security Solutions Crucial in Modern Cloud Environments? How can organizations navigate the complexities of cybersecurity to ensure robust protection, particularly when dealing with Non-Human Identities (NHIs) in cloud environments? The answer lies in leveraging AI-driven security solutions, which offer remarkable flexibility and adaptability for cybersecurity professionals. Understanding Non-Human Identities: The Backbone […]
How Are Non-Human Identities Transforming Cybersecurity Practices? Are you aware of the increasing importance of Non-Human Identities (NHIs)? As organizations transition toward more automated and cloud-based environments, managing NHIs and secrets security becomes vital. These machine identities serve as the backbone for securing sensitive operations across industries like financial services, healthcare, and DevOps environments. Understanding […]
Are Non-Human Identities the Key to Strengthening Agentic AI Security? In a landscape increasingly dominated by Agentic AI, organizations are pivoting toward more advanced security paradigms to protect their digital assets. Non-Human Identities (NHIs) and Secrets Security Management have emerged as pivotal elements in this quest for heightened cybersecurity. But why should this trend be generating excitement […]
As AI coding tools rise in popularity among some software developers, their adoption has begun to touch every aspect of the development process, including human developers using the tools to improve existing AI coding tools. We’re not talking about runaway self-improvement here; just people using tools to improve the tools themselves.
In interviews with Ars Technica this week, OpenAI employees revealed the extent to which the company now relies on its own AI coding agent, Codex, to build and improve the development tool. “I think the vast majority of Codex is built by Codex, so it’s almost entirely just being used to improve itself,” said Alexander Embiricos, product lead for Codex at OpenAI, in a conversation on Tuesday. Codex now generates most of the code that OpenAI developers use to improve the tool itself.
Codex, which OpenAI launched in its modern incarnation as a research preview in May 2025, operates as a cloud-based software engineering agent that can handle tasks like writing features, fixing bugs, and proposing pull requests. The tool runs in sandboxed environments linked to a user’s code repository and can execute multiple tasks in parallel. OpenAI offers Codex through ChatGPT’s web interface, a command-line interface (CLI), and IDE extensions for VS Code, Cursor, and Windsurf.
On Thursday, OpenAI released GPT-5.2, its newest family of AI models for ChatGPT, in three versions called Instant, Thinking, and Pro. The release follows CEO Sam Altman’s internal “code red” memo earlier this month, which directed company resources toward improving ChatGPT in response to competitive pressure from Google’s Gemini 3 AI model.
“We designed 5.2 to unlock even more economic value for people,” Fidji Simo, OpenAI’s chief product officer, said during a press briefing with journalists on Thursday. “It’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long context, using tools and then linking complex, multi-step projects.”
As with previous versions of GPT-5, the three model tiers serve different purposes: Instant handles faster tasks like writing and translation; Thinking spits out simulated reasoning “thinking” text in an attempt to tackle more complex work like coding and math; and Pro generates even more simulated reasoning text with the goal of delivering the highest-accuracy performance for difficult problems.
Are Non-Human Identities (NHIs) the Missing Link in Your Cloud Security Strategy? As technology reshapes industries, the concept of Non-Human Identities (NHIs) has emerged as a critical component of cloud-native security strategies. But what exactly are NHIs, and why are they essential in achieving security assurance? Decoding Non-Human Identities in Cybersecurity The term Non-Human […]
Can Agentic AI Revolutionize Cybersecurity Practices? In a world where digital threats consistently challenge organizations, how can cybersecurity teams leverage innovations to bolster their defenses? Enter the concept of Agentic AI, a technology that could serve as a powerful ally in the ongoing battle against cyber threats. By enhancing the management of Non-Human Identities (NHIs) and secrets security management, […]
On Tuesday, French AI startup Mistral AI released Devstral 2, a 123 billion parameter open-weights coding model designed to work as part of an autonomous software engineering agent. The model achieves a 72.2 percent score on SWE-bench Verified, a benchmark that attempts to test whether AI systems can solve real GitHub issues, putting it among the top-performing open-weights models.
Perhaps more notably, Mistral didn’t just release an AI model; it also released a new development app called Mistral Vibe. It’s a command-line interface (CLI) similar to Claude Code, OpenAI Codex, and Gemini CLI that lets developers interact with the Devstral models directly in their terminal. The tool can scan file structures and Git status to maintain context across an entire project, make changes across multiple files, and execute shell commands autonomously. Mistral released the CLI under the Apache 2.0 license.
It’s always wise to take AI benchmarks with a large grain of salt, but we’ve heard from employees of the big AI companies that they pay very close attention to how well models do on SWE-bench Verified, which presents AI models with 500 real software engineering problems pulled from GitHub issues in popular Python repositories. The AI must read the issue description, navigate the codebase, and generate a working patch that passes unit tests. While some AI researchers have noted that around 90 percent of the tasks in the benchmark test relatively simple bug fixes that experienced engineers could complete in under an hour, it’s one of the few standardized ways to compare coding models.
Model Context Protocol (MCP) is quickly becoming the backbone of how AI agents interact with the outside world. It gives agents a standardized way to discover tools, trigger actions, and pull data. MCP dramatically simplifies integration work. In short, MCP servers act as the adapter that grants access to services, manages credentials and permissions, and..
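To make that adapter role concrete, here is a minimal sketch of an MCP server built with the official Python SDK's FastMCP helper; the server name, the tool, and its logic are illustrative assumptions rather than details from the article.

```python
# Minimal MCP server sketch (assumes the official "mcp" Python SDK).
# The tool and its logic are illustrative, not from the article.
from mcp.server.fastmcp import FastMCP

server = FastMCP("demo-weather")

@server.tool()
def get_forecast(city: str) -> str:
    """Return a short forecast string for the given city."""
    # A real server would call an external service here, holding the
    # credentials itself so the agent never sees them.
    return f"Forecast for {city}: sunny, 21°C"

if __name__ == "__main__":
    # Serve over stdio so an MCP-capable agent can discover and call the tool.
    server.run()
```

An agent connected to this server would discover `get_forecast` through the protocol's standard tool listing and invoke it without ever handling the underlying service credentials.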
Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.
AI agents are specialized implementations of AI language models designed to perform multistep tasks autonomously rather than simply responding to single prompts. So-called “agentic” features have been central to Microsoft’s 2025 sales pitch: At its Build conference in May, the company declared that it has entered “the era of AI agents.”
The company has promised customers that agents could automate complex tasks, such as generating dashboards from sales data or writing customer reports. At its Ignite conference in November, Microsoft announced new features like Word, Excel, and PowerPoint agents in Microsoft 365 Copilot, along with tools for building and deploying agents through Azure AI Foundry and Copilot Studio. But as the year draws to a close, that promise has proven harder to deliver than the company expected.
OAuth is a broadly accepted standard. It’s used all over the internet. But as the usage of LLM agents continues to expand, OAuth isn’t going to be enough. In fact, relying on OAuth will be dangerous. We won’t be able to set permissions at an appropriate granularity, giving LLMs access to far too much. More..
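A toy sketch of that granularity gap follows; the scope names and the task structure are hypothetical, not taken from any specific OAuth provider.

```python
# Illustrative sketch of the granularity gap described above; the scope
# names and task shape are hypothetical, not from any specific provider.

# A typical OAuth grant is scoped to a whole capability...
granted_scopes = {"repo"}  # e.g., full read/write to every repository

# ...but the agent's current task only needs a much narrower permission.
task = {
    "action": "comment",
    "resource": "repos/acme/widget/issues/42",
}

def scope_covers(scopes: set[str], task: dict) -> bool:
    # Coarse scopes can only answer yes or no at the capability level;
    # they cannot express "comment on this one issue and nothing else".
    return "repo" in scopes

print(scope_covers(granted_scopes, task))  # True -- far more access than needed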
As AI platforms grow more complex and interdependent, small failures can cast long shadows. That’s what happened inside the open-source CrewAI platform, where a vulnerability in its error-handling logic surfaced during a provisioning failure. The resulting “exception response” (the message a service returns when it encounters an unhandled error during a request) contained […]
Alan reflects on a turbulent year in DevSecOps, highlighting the rise of AI-driven security, the maturing of hybrid work culture, the growing influence of platform engineering, and the incredible strength of the DevSecOps community — while calling out the talent crunch, tool sprawl and security theater the industry must still overcome.
Are Non-Human Identities the Key to Secure Cloud Environments? How do we ensure our systems remain secure, especially when it comes to machine identities and their secrets? The management of Non-Human Identities (NHIs) is a crucial aspect of cybersecurity, offering a comprehensive approach to protecting machine identities and their associated credentials in the cloud. Understanding […]
What Role Does Secrets Management Play in Harnessing Agentic AI? Imagine a world where machines not only execute tasks but also make decisions, adapt, and evolve just like humans. This is the emerging frontier of Agentic AI, a transformative force. However, as promising as this technology is, its seamless and secure operation hinges significantly on effective Secrets Management. […]
Why Is Managing Non-Human Identities Essential in Cloud Security? Non-Human Identities (NHIs) play an instrumental role in modern cybersecurity frameworks. But what exactly constitutes an NHI, and why is its management vital in safeguarding our digital assets? Machine identities, known as NHIs, are the digital equivalents of human identities and are instrumental in ensuring secure interactions […]
A new vulnerability scoring system has just been announced. The initiative, called the AI Vulnerability Scoring System (AIVSS), aims to fill the gaps left by traditional models such as the Common Vulnerability Scoring System (CVSS), which were not designed to handle the complex, non-deterministic nature of modern AI technologies.

AI security expert, author, and adjunct professor Ken Huang introduced the AIVSS framework, emphasizing that while CVSS has long been a cornerstone for assessing software vulnerabilities, it fails to capture the unique threat landscape presented by agentic and autonomous AI systems.

“The CVSS and other regular software vulnerability frameworks are not enough,” Huang explained. “These assume traditional deterministic coding. We need to deal with the non-deterministic nature of Agentic AI.”

Huang serves as co-leader of the AIVSS project working group alongside several prominent figures in cybersecurity and academia, including Zenity Co-Founder and CTO Michael Bargury, Amazon Web Services Application Security Engineer Vineeth Sai Narajala, and Stanford University Information Security Officer Bhavya Gupta. Together, the group has collaborated under the Open Worldwide Application Security Project (OWASP) to develop a framework that provides a structured and measurable approach to assessing AI-related security threats.

According to Huang, Agentic AI introduces unique challenges because of its partial autonomy. “Autonomy is not itself a vulnerability, but it does elevate risk,” he noted. The AIVSS is designed specifically to quantify those additional risk factors that emerge when AI systems make independent decisions, interact dynamically with tools, or adapt their behavior in ways that traditional software cannot.
A New Approach to AI Vulnerability Scoring
The AI Vulnerability Scoring System builds upon the CVSS model, introducing new parameters tailored to the dynamic nature of AI systems. The AIVSS score begins with a base CVSS score and then incorporates an agentic capabilities assessment. This additional layer accounts for autonomy, non-determinism, and tool use, factors that can amplify risk in AI-driven systems. The combined score is then divided by two and multiplied by an environmental context factor to produce a final vulnerability score.

A dedicated portal, available at aivss.owasp.org, provides documentation, structured guides for AI risk assessment, and a scoring tool for practitioners to calculate their own AI vulnerability scores.

Huang highlighted a critical difference between AI systems and traditional software: the fluidity of AI identities. “We cannot assume the identities used at deployment time,” he said. “With agentic AI, you need the identity to be ephemeral and dynamically assigned. If you really want to have autonomy, you have to give it the privileges it needs to finish the task.”
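Based purely on the arithmetic described above, the calculation looks something like the following sketch; the variable names and example values are ours, and the scoring tool at aivss.owasp.org defines the authoritative inputs.

```python
# Sketch of the scoring arithmetic as described in this article; variable
# names and example values are illustrative, not the official AIVSS tool.

def aivss_score(cvss_base: float, agentic_assessment: float,
                env_factor: float) -> float:
    """Combine a CVSS base score with an agentic capabilities assessment,
    halve the sum, then apply the environmental context factor."""
    return ((cvss_base + agentic_assessment) / 2) * env_factor

# Example: a CVSS 7.5 flaw in a highly autonomous agent (assessment 8.0)
# deployed in a sensitive environment (context factor 1.1).
print(aivss_score(7.5, 8.0, 1.1))  # 8.525
```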
Top Risks in Agentic AI Systems
The AIVSS project has also identified the ten most severe core security risks for Agentic AI, though the team has refrained from calling it an official “Top 10” list. The current risks include:
Agentic AI Tool Misuse
Agent Access Control Violation
Agent Cascading Failures
Agent Orchestration and Multi-Agent Exploitation
Agent Identity Impersonation
Agent Memory and Context Manipulation
Insecure Agent Critical Systems Interaction
Agent Supply Chain and Dependency Attacks
Agent Untraceability
Agent Goal and Instruction Manipulation
Each of these risks reflects the interconnected and compositional nature of AI systems. As the draft AIVSS document notes, “Some repetition across entries is intentional. Agentic systems are compositional and interconnected by design. To date, the most common risks such as Tool Misuse, Goal Manipulation, or Access Control Violations, often overlap or reinforce each other in cascading ways.”

Huang provided an example of how this manifests in practice: “For tool misuse, there shouldn’t be a risk in selecting a tool. But in MCP systems, there is tool impersonation, and also insecure tool usage.”
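As a toy illustration of that tool-impersonation scenario, consider a naive registry keyed only on tool names; the registry and server names below are hypothetical, not a real MCP implementation.

```python
# Toy illustration of the tool-impersonation risk Huang describes; the
# registry and server names are hypothetical, not a real MCP codebase.

registry: dict[str, str] = {}

def register_tool(name: str, origin: str) -> None:
    # A registry keyed only on the tool's advertised name lets a later
    # registration silently shadow an earlier, legitimate one.
    registry[name] = origin

register_tool("send_email", "trusted-server.internal")
register_tool("send_email", "attacker-server.example")  # impersonation

# An agent selecting tools purely by name now routes to the attacker.
print(registry["send_email"])  # attacker-server.example
```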