โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Empowering Rapid Attack Path Analysis with Generative AI

By: Editorial
21 April 2024 at 05:45

Cybersecurity

By Nathan Wenzler, Chief Security Strategist, Tenable

India is ranked third globally among nations facing the most severe cyber threats, according to the World Economic Forum. Yet despite this alarming statistic, there is a significant disparity between the escalating volume of threats and the resources allocated to combat them. The cybersecurity sector is grappling with a colossal skills deficit, with a shortage of four million professionals worldwide. With limited resources, even seasoned cybersecurity experts find it daunting to navigate and decipher the increasingly intricate landscape of modern cyber threats across an ever-widening attack surface.

Role of Generative AI in Enhancing Cybersecurity Strategy

In response to this challenge, organizations are turning to generative AI to bridge the expertise gap and strengthen their resilience against risk. A survey reveals that 44% of IT and cyber leaders express high confidence in generative AI's capacity to enhance their organization's cybersecurity strategy.

Security teams are increasingly consumed by the arduous task of scrutinizing the various attack vectors in their systems and analyzing the tactics, techniques, and procedures employed by potential threat actors. Too often, they find themselves reacting to cyberattacks after the fact rather than proactively thwarting them, a strategy far from ideal for robust cybersecurity. Organizations in India must shift to a proactive stance, actively pursuing and understanding threats to establish a strong line of defense.

The expanding attack surface, coupled with the rapid adoption of cloud services, virtualization platforms, microservices, applications, and code libraries, has added immense complexity to the security landscape. Organizations must now contend with vulnerabilities, cloud misconfigurations, and risks associated with identity access, groups, and permissions. Conventional attack path analysis tools offer insight into threat actor entry points, which assets are key targets, and what threats may exist, but interpreting those findings can demand painstaking, step-by-step manual effort. While attackers need just one entry point to infiltrate a system and move laterally within it, defenders face the formidable task of analyzing the entire threat landscape at once, identifying every potential attack path, and applying security measures where they mitigate the most risk, especially when operating with limited staff.
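To make that prioritization problem concrete, below is a minimal, illustrative sketch, not taken from this article or any specific product: it models an environment as a toy graph of hypothetical assets with made-up exposure weights, enumerates attack paths from an internet-facing entry point to critical assets, and ranks them by a rough composite score. In a real deployment, vulnerability, cloud, web application, and identity data would populate the graph, and a generative AI layer would turn the ranked paths into plain-language summaries and mitigation guidance.

# Illustrative only: a toy attack path graph with hypothetical assets and weights.
# A production attack path analysis tool would build this graph from real
# vulnerability, cloud misconfiguration, and identity exposure data.
GRAPH = {
    "internet": [("vpn-gateway", 0.6), ("web-app", 0.8)],
    "web-app": [("app-server", 0.7)],
    "vpn-gateway": [("jump-host", 0.5)],
    "jump-host": [("domain-controller", 0.9)],
    "app-server": [("customer-db", 0.9), ("domain-controller", 0.4)],
}
CRITICAL_ASSETS = {"customer-db", "domain-controller"}

def enumerate_paths(start="internet"):
    """Depth-first enumeration of attack paths from the entry point to critical assets."""
    stack = [(start, [start], 1.0)]
    while stack:
        node, path, score = stack.pop()
        if node in CRITICAL_ASSETS:
            yield path, score
            continue
        for nxt, weight in GRAPH.get(node, []):
            if nxt not in path:  # avoid revisiting nodes (no cycles)
                stack.append((nxt, path + [nxt], score * weight))

# Rank paths so defenders see the riskiest route to a critical asset first.
for path, score in sorted(enumerate_paths(), key=lambda p: p[1], reverse=True):
    print(f"{score:.2f}  " + " -> ".join(path))

Ranking the enumerated paths (here, the web-app to app-server to customer-db route scores highest) is the step that tells a short-staffed team where one fix removes the most risk; a generative AI layer of the kind discussed below would sit on top of exactly this sort of output.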

Empowering Security Teams with Generative AI

Generative AI emerges as a potent solution to these challenges, empowering security teams with an attacker's perspective so they can map out potential threats and prioritize mitigation based on criticality. By consolidating data from disparate sources, generative AI makes the complexity of the attack surface easier to understand, enabling organizations to assess exposures more quickly, prioritize actions, and visualize relationships across the entire attack surface. Security teams can therefore make risk decisions faster, leaving attackers less time to take advantage of an exposed asset and begin their assault on the organization.

Generative AI-powered attack path analysis amalgamates and distills insights from vulnerability management, cloud security, web application, and identity exposures, enabling organizations to comprehend their risk from the perspective of an attacker. This facilitates informed and targeted cyber defense strategies, allowing organizations to anticipate threats and fortify their defenses accordingly. Through succinct summaries and mitigation guidance, generative AI gives security teams a quicker, more efficient view of actionable insights, sparing them the tedious work of manually researching what the threats are and what the correct security controls should be, whether that is identifying specific patches and version numbers or understanding how to correct unauthorized user access. Even team members with varying levels of expertise can draw actionable conclusions from generative AI, which simplifies complex cyberattack paths and enables effective threat mitigation.

In summary, generative AI supports a more comprehensive and proactive approach to cybersecurity, empowering organizations to understand and address potential threats quickly. By breaking free from the constraints of siloed security data, organizations can develop strategies to predict, prevent, and mitigate cyber risks more effectively and faster than ever before.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.

OpenAI's flawed plan to flag deepfakes ahead of 2024 elections

7 May 2024 at 18:19

(Image credit: Boris Zhitkov | Moment)

As the US moves toward criminalizing deepfakes (deceptive AI-generated audio, images, and videos that are increasingly hard to discern from authentic content online), tech companies have rushed to roll out tools to help everyone better detect AI content.

But efforts so far have been imperfect, and experts fear that social media platforms may not be ready to handle the ensuing AI chaos during major global elections in 2024, despite tech giants committing to making tools specifically to combat AI-fueled election disinformation. The best AI detection remains observant humans, who, by paying close attention to deepfakes, can pick up on flaws like AI-generated people with extra fingers or AI voices that speak without pausing for a breath.

Among the splashiest tools announced this week, OpenAI shared details today about a new AI image detection classifier that it claims can detect about 98 percent of AI outputs from its own sophisticated image generator, DALL-E 3. It also "currently flags approximately 5 to 10 percent of images generated by other AI models," OpenAI's blog said.


Downranking won't stop Google's deepfake porn problem, victims say

14 May 2024 at 18:00

(Image credit: imaginima | E+)

After backlash over Google's search engine becoming the primary traffic source for deepfake porn websites, Google has started burying these links in search results, Bloomberg reported.

Over the past year, Google has been driving millions of visitors to controversial sites distributing AI-generated pornography that depicts real people in fake sex videos created without their consent, Similarweb found. While anyone can be targeted (police are already bogged down dealing with a flood of fake AI child sex images), female celebrities are the most common victims. And their fake non-consensual intimate imagery is more easily discoverable on Google by searching just about any famous name with the keyword "deepfake," Bloomberg noted.

Google refers to this content as "involuntary fake" or "synthetic pornography." The search engine provides a path for victims to report that content whenever it appears in search results. And when processing these requests, Google also removes duplicates of any flagged deepfakes.


Robert F. Kennedy Jr. sues Meta, citing chatbot's reply as evidence of shadowban

16 May 2024 at 17:43
Screenshot from the documentary Who Is Bobby Kennedy? (credit: whoisbobbykennedy.com)

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy's documentary and making it harder to support Kennedy's candidacy. This allegedly has caused "substantial donation losses," while also violating the free speech rights of Kennedy, his supporters, and his film's production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms' automated spam filters.



User Outcry as Slack Scrapes Customer Data for AI Model Training

17 May 2024 at 12:43

Slack reveals it has been training AI/ML models on customer data, including messages, files, and usage information. Customers are included by default and must opt out.

The post User Outcry as Slack Scrapes Customer Data for AI Model Training appeared first on SecurityWeek.

โŒ
โŒ