
Rapid7 Infuses Generative AI into the InsightPlatform to Supercharge SecOps and Augment MDR Services

13 June 2024 at 09:00

In the ever-evolving landscape of cybersecurity, staying ahead of threats is not just a goal—it's a necessity. At Rapid7, we are pioneering the infusion of artificial intelligence (AI) into our platform and service offerings, transforming the way security operations centers (SOCs) around the globe operate. We’ve been utilizing AI in our technologies for decades, establishing patented models to better and more efficiently solve customer challenges. Furthering this endeavor, we’re excited to announce we’ve extended the Rapid7 AI Engine to include new Generative AI capabilities being used by our internal SOC teams, transforming the way we deliver our MDR services.

A Thoughtful, Deliberate Approach to AI Model Deployment

At Rapid7, one of our core philosophical beliefs is that vendors like ourselves should not lean on customers to tune our models. This belief shapes how we deploy AI models: each model is first released to our internal SOC teams to be trained and battle-tested before reaching customers through in-product experiences.

Another core pillar of our AI development principles is that human supervision is essential and can’t be completely removed from the process. We believe wholeheartedly in the efficacy of our models, but the reality is that AI is not immune from making mistakes. At Rapid7, we have the advantage of working in lockstep with one of the world's leading SOC teams. With a continuous feedback loop in place between our frontline analysts and our AI and data science team, we’re constantly fine-tuning our models, and MDR customers benefit from knowing our teams are validating any AI-generated output for accuracy.

Intelligent Threat Detection and Continuous Alert Triage Validation

The first line of defense in any cybersecurity strategy is the ability to detect threats accurately and efficiently. The Rapid7 AI Engine leverages a massive volume of high-fidelity risk and threat data to enhance alert triage, accurately distinguishing malicious alerts from benign ones so analysts can focus only on the alerts that are truly malicious. The engine has also been extended with a combination of traditional machine learning (ML) and Generative AI models to ensure new security alerts are accurately labeled as malicious or benign. This work boosts the signal-to-noise ratio, enabling Rapid7 analysts to spend more time investigating the security signals that matter most to our customers.
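As a concrete illustration of the triage idea, a minimal alert-scoring sketch is shown below. The feature names, weights, and threshold are hypothetical, not Rapid7's actual models, which combine ML and Generative AI rather than fixed rules:

```python
# Illustrative triage sketch: score each alert and route only likely-malicious
# ones to an analyst, boosting the signal-to-noise ratio.
from dataclasses import dataclass

@dataclass
class Alert:
    process: str
    signed_binary: bool
    outbound_to_known_bad: bool
    prevalence: int  # how many hosts in the fleet have seen this process

def triage_score(alert: Alert) -> float:
    """Return a score in [0, 1]; higher means more likely malicious."""
    score = 0.0
    if alert.outbound_to_known_bad:
        score += 0.6                 # strong threat-intel signal
    if not alert.signed_binary:
        score += 0.2                 # unsigned code is weakly suspicious
    if alert.prevalence < 5:
        score += 0.2                 # rare binaries deserve a closer look
    return min(score, 1.0)

def needs_analyst(alert: Alert, threshold: float = 0.5) -> bool:
    return triage_score(alert) >= threshold

suspicious = Alert("dropper.exe", signed_binary=False,
                   outbound_to_known_bad=True, prevalence=1)
routine = Alert("chrome.exe", signed_binary=True,
                outbound_to_known_bad=False, prevalence=9000)
print(needs_analyst(suspicious), needs_analyst(routine))  # True False
```

A real system would learn these weights from labeled triage outcomes rather than hard-code them, but the routing decision at the end is the same.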

Introducing Our AI-Powered SOC Assistant

Generative AI is not just a tool; it's a game-changer for SOC efficiency. Our AI-native SOC assistant empowers MDR analysts to quickly respond to security threats and proactively mitigate risks on behalf of our customers. Because we fundamentally believe AI should be trained by the knowledge of our teams and vetted processes, our SOC assistant utilizes our vast internal knowledge bases. Sources like the Rapid7 MDR Handbook - a resource amassed over decades of experience cultivated by our elite SOC team - enable the assistant to guide analysts through complex investigations and streamline response workflows, keeping our analysts a step ahead.

Rapid7 is further using generative AI to carefully automate the drafting of security reports for SOC analysts, typically a manual and time-intensive process. With more than 11,000 customers globally, the Rapid7 SOC triages a huge volume of activity each month, with summaries that are critical for keeping customers fully updated on what’s happening in their environment and actions performed on their behalf. While AI is a key tool to streamline report building and delivery, every report that is generated by the Rapid7 AI Engine is augmented and enhanced by our SOC teams, making certain every data point is accurate and actionable. Beyond providing expert guidance, the AI assistant also has the ability to automatically generate incident reports once investigations are closed out, streamlining the process and ensuring we can communicate updates with customers in a timely manner.
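The human-in-the-loop reporting flow described above can be sketched as follows. The `Investigation` and `Report` types and their fields are hypothetical; the point they illustrate is only the rule that an AI-generated draft is not releasable until an analyst approves it:

```python
# Hypothetical sketch: a report is drafted automatically from structured
# investigation data, but can only be released after analyst review.
from dataclasses import dataclass

@dataclass
class Investigation:
    customer: str
    alert_name: str
    actions_taken: list
    closed: bool = False

def draft_report(inv: Investigation) -> str:
    actions = "\n".join(f"- {a}" for a in inv.actions_taken)
    return (f"Incident report for {inv.customer}\n"
            f"Alert: {inv.alert_name}\n"
            f"Actions performed on your behalf:\n{actions}\n")

@dataclass
class Report:
    body: str
    analyst_approved: bool = False

    def release(self) -> str:
        if not self.analyst_approved:
            raise RuntimeError("draft must be reviewed by an analyst first")
        return self.body

inv = Investigation("Acme Corp", "Suspicious PowerShell execution",
                    ["Isolated host", "Reset credentials"], closed=True)
report = Report(body=draft_report(inv))
```

Encoding the review step as a hard precondition, rather than a convention, is one simple way to guarantee that no AI output reaches a customer unvalidated.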

An Enabler for Secure AI/ML Application Development

We know we’re not alone in developing Generative AI solutions, and as such we’re also focused on delivering capabilities that allow our customers to implement and adhere to AI/ML development best practices. We continue to expand our support for Generative AI services from major cloud service providers (CSPs), including AWS Bedrock, Azure OpenAI service and GCP Vertex. These services can be continuously audited against best practices outlined in the Rapid7 AI/ML Security Best Practices compliance pack, which includes the mitigations outlined in the OWASP Top 10 for ML and large language models (LLMs). Our continuous auditing process, enriched by InsightCloudSec’s Layered Context, offers a comprehensive view of AI-related cloud risks, ensuring that our customers' AI-powered assets are secure.
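The continuous-auditing idea can be sketched as a set of predicates evaluated against every discovered Gen-AI resource. The resource fields and check names below are hypothetical illustrations, not the actual InsightCloudSec compliance pack:

```python
# Illustrative sketch: a compliance "pack" as a dict of named predicates,
# applied to every discovered Gen-AI cloud resource to yield findings.
checks = {
    "logging-enabled":        lambda r: r.get("logging", False),
    "no-public-endpoint":     lambda r: not r.get("public_endpoint", True),
    "prompt-injection-guard": lambda r: r.get("input_filtering", False),
}

def audit(resources):
    """Return (resource_id, failed_check) pairs for every violation."""
    findings = []
    for res in resources:
        for name, passed in checks.items():
            if not passed(res):
                findings.append((res["id"], name))
    return findings

fleet = [
    {"id": "bedrock-1", "logging": True, "public_endpoint": False,
     "input_filtering": True},
    {"id": "vertex-7", "logging": False, "public_endpoint": True,
     "input_filtering": False},
]
print(audit(fleet))
```

Running the audit on every inventory refresh, rather than on a schedule, is what makes the process "continuous": a newly provisioned or reconfigured resource is flagged as soon as it appears.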

The Future of MDR Services is Powered by AI

The integration of Generative AI into the Insight Platform is not just about helping our teams keep pace - it's about setting the pace. With unparalleled scalability and adaptability, Rapid7 is committed to maintaining a competitive edge in the market, particularly as it relates to leveraging AI to transform security operations. Our focus on operational efficiencies, cost reduction, and improved quality of service is unwavering. We're not just responding to the changing threat landscape – we're reshaping it.

The future of MDR services is here, and it's powered by the Rapid7 AI Engine.

AI Trust, Risk, and Security Management: Why Tackle Them Now?

15 May 2024 at 09:00

Co-authored by Sabeen Malik and Laura Ellis

In the evolving world of artificial intelligence (AI), keeping our customers secure and maintaining their trust is our top priority. As AI technologies integrate more deeply into our daily operations and services, they bring a set of unique challenges that demand a robust management strategy:

  1. The Black Box Dilemma: AI models pose significant challenges in terms of transparency and predictability. This opaque nature can complicate efforts to diagnose and rectify issues, making predictability and reliability hard to achieve.
  2. Model Fragility: AI's performance is closely tied to the data it processes. Over time, subtle changes in data input—known as data drift—can degrade an AI system’s accuracy, necessitating constant monitoring and adjustments.
  3. Easy Access, Big Responsibility: The democratization of AI through cloud services means that powerful AI tools are just a few clicks away for developers. This ease of access underscores the need for rigorous security measures to prevent misuse and effectively manage vulnerabilities.
  4. Staying Ahead of the Curve: With AI regulation still in its formative stages, proactively developing self-regulatory frameworks like ours helps inform our future AI regulatory compliance frameworks; most importantly, it builds trust among our customers. When thinking about AI’s promises and challenges, we know that trust is earned. That trust also concerns global policymakers, which is why we look forward to engaging with NIST on discussions related to the AI Risk Management, Cybersecurity, and Privacy frameworks, and why we were an inaugural signer of the CISA Secure by Design Pledge, demonstrating to government stakeholders and customers our commitment to building secure products.
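The data-drift problem in point 2 is commonly monitored with a population stability index (PSI), which compares a feature's live distribution against the one the model was trained on. The sketch below uses common conventions (ten equal-width bins, a 0.2 drift threshold) that are illustrative, not Rapid7-specific values:

```python
# Minimal PSI sketch for data-drift monitoring: compare the binned
# distribution of a feature at training time vs. in production.
import math

def psi(expected, actual, bins=10):
    """Compare two numeric samples; PSI > 0.2 is a common drift alarm."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def frac(sample, i):
        count = sum(1 for x in sample
                    if lo + i * width <= x < lo + (i + 1) * width)
        return max(count / len(sample), 1e-6)   # floor avoids log(0)
    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

baseline = [0.1 * i for i in range(100)]        # training-time distribution
shifted  = [5.0 + 0.1 * i for i in range(100)]  # live data drifted upward
print(psi(baseline, baseline) < 0.2, psi(baseline, shifted) > 0.2)
```

When the PSI for a monitored feature crosses the threshold, the usual responses are retraining the model or investigating the upstream data source, which is exactly the "constant monitoring and adjustment" the point above calls for.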

Our TRiSM (Trust, Risk, and Security Management) framework isn’t merely a component of our operations—it’s a foundational strategy that guides us in navigating the intricate landscape of AI with confidence and security.

How We Approach AI Security at Rapid7

Rapid7 leverages the best available technology to protect our customers' attack surfaces. Our mission drives us to keep abreast of the latest AI advancements to deliver optimal value to customers while effectively managing the inherent risks of the technology.

Innovation and scientific excellence are key aspects of our AI strategy. We strive for continuous improvement, leveraging the latest technological innovations and scientific research. By engaging with thought leaders and adopting best practices, we aim to stay at the forefront of AI technology, ensuring our solutions are not only effective but also pioneering and thoughtful.

Our AI principles center on transparency, fairness, safety, security, privacy, and accountability. These principles are not just guidelines; they are integral to how we build, deploy, and manage our AI systems. Accountability is a cornerstone of our strategy, and we hold ourselves responsible for the proper functioning of our AI systems so we can ensure they respect and embody our principles throughout their lifecycle. This includes ongoing oversight, regular audits, and adjustments as needed based on feedback and evolving standards.

We have leveraged a number of AI risk management frameworks to inform our approach.  Most notably, we have adopted the NIST AI Risk Management Framework and the Open Standard for Responsible AI. These frameworks help us comprehensively assess and manage AI risks, from the early stages of development through deployment and ongoing use. The NIST framework provides a thorough methodology for lifecycle risk management, while the Open Standard offers practical tools for evaluation and ensures that our AI systems are user-centric and responsible.

We are committed to ensuring that our AI deployments are not only technologically advanced but also adhere to the highest standards of security and ethical responsibility.

AI Integration in Action: Making It Work Day-to-Day

We take a practical approach to adhere to our AI TRiSM framework by integrating it into the daily operations of our existing technologies and processes, ensuring that AI enhances rather than complicates our security posture:

  1. Clear Rules: We have developed and implemented detailed enterprise-wide policies and operational procedures that govern the deployment and use of AI technologies. These guidelines ensure consistency and compliance across all departments and initiatives.
  2. Transparency Matters: We use our own tooling to gain visibility into our cloud security posture for AI, leveraging InsightCloudSec to provide comprehensive visibility into our AI deployments across various environments. This visibility is crucial to our security strategy, encapsulated by the philosophy, "You can’t protect what you can’t see," and allows us to monitor, evaluate, and adjust our AI resources proactively.
  3. Throughout the Development Lifecycle: We integrate rigorous AI evaluations at every phase of our software development lifecycle. From the initial development stages to production and through regular post-deployment assessments, our framework ensures that AI systems are safe, effective, and aligned with our ethical standards.
  4. Smart Governance: By embedding AI-specific governance protocols into our existing code and cloud configuration management systems, we maintain strict control over all AI-related activities. This integration ensures that our AI initiatives comply with established best practices and regulatory requirements.
  5. Empowering Our Team: We recognize the critical need for advanced AI skills in today’s tech landscape. To address this, we offer training programs and collaborative opportunities, which not only foster innovation but also ensure adherence to best practices. This approach empowers our teams to innovate confidently within a secure and supportive environment.
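The lifecycle evaluations in point 3 are often implemented as a release gate: a check that refuses to promote a model unless its offline metrics clear minimum thresholds. The metric names and threshold values below are illustrative assumptions, not Rapid7's actual gates:

```python
# Illustrative evaluation gate: a model build is only promoted to
# production if every offline metric clears its minimum threshold.
GATES = {"precision": 0.90, "recall": 0.85, "false_positive_rate": 0.05}

def passes_gates(metrics: dict) -> bool:
    """Return True only if all metrics meet the release thresholds."""
    return (metrics["precision"] >= GATES["precision"]
            and metrics["recall"] >= GATES["recall"]
            and metrics["false_positive_rate"] <= GATES["false_positive_rate"])

good = {"precision": 0.95, "recall": 0.90, "false_positive_rate": 0.02}
regressed = {"precision": 0.95, "recall": 0.70, "false_positive_rate": 0.02}
print(passes_gates(good), passes_gates(regressed))  # True False
```

Wiring a check like this into CI means a regression in any one metric blocks the release automatically, turning "evaluations at every phase" from a policy into an enforced pipeline step.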

Integrating AI into our core processes enhances our operational security and underscores our commitment to ethical innovation. At Rapid7, we are dedicated to leading responsibly in the AI space, ensuring that our technological advancements positively contribute to our customers, company, and society.

Our AI TRiSM framework is not merely a set of policies—it's a proactive, strategic approach to securely and ethically harnessing new technologies. As we continue to innovate and push the boundaries of what’s possible with AI, we stay focused on setting a high bar for standards of responsible and secure AI usage, ensuring that our customers always receive the best technology solutions. Learn more here.
