
2024 Is The Year of Elections… And Disinformation

By: Editorial
27 April 2024 at 09:27

Elections

By Roman Faithfull, Cyber Intelligence Lead, Cyjax

2024 will see more elections than any other year in history: the UK, the US, Russia, India, Taiwan and more. According to AP, at least 40 countries will go to the polls this year, and some of these contests will have ramifications far beyond their national borders. This will also make 2024 a year of misinformation, as groups both within and outside these countries look to exert their influence on the democratic process.

As the US presidential election draws near, specialists caution that a combination of domestic and international factors, across conventional and digital media platforms, and against a backdrop of increasing authoritarianism, profound mistrust, and political and social turbulence, heightens the severity of the threats posed by propaganda, disinformation, and conspiracy theories.

Two terms are frequently conflated here. Disinformation is deliberately false content crafted to inflict harm, whereas misinformation is inaccurate or deceptive content shared by individuals who genuinely believe it to be true. It can be difficult to establish whether people are acting in good faith, so the terms are often used interchangeably—and misinformation often starts out as carefully crafted disinformation.

The overall outlook appears bleak, with governments already experiencing the effects of misinformation. The groundwork has been laid, evidenced by past initiatives that aimed to influence elections in favor of certain parties. In 2022, the BBC launched an investigative project, creating fake accounts to follow the spread of misinformation on platforms such as Facebook, Twitter, and TikTok, and its potential political impact. Despite attempts by social media platforms to tackle this problem, the project found that false information, particularly from far-right viewpoints, remains prevalent. Today, just two years on, the techniques and tools to manipulate information are even more advanced.

The Deceptive Side of Tech

AI is dominating every discussion of technology right now, as its uses are explored for good and ill. Spreading fake news and disinformation is one of those uses. In its 2024 Global Risks report, the World Economic Forum noted that the increasing worry regarding misinformation and disinformation primarily stems from the fear that AI, wielded by malicious actors, could flood worldwide information networks with deceptive stories. And last year, the UK's National Cyber Security Centre released a report exploring the potential for nations like China and Russia to employ AI for voter manipulation and meddling in electoral processes.

Deepfakes have grabbed a lot of attention, but could they disrupt future elections? It's not a future problem—we're already there. Deepfake audio recordings mimicking Keir Starmer, the leader of the Labour Party, and Sadiq Khan, the mayor of London, have surfaced online. The latter was designed to inflame tensions ahead of a day of protest in London. One of those responsible for sharing the clip apologized, but added that they believed the mayor held views similar to those in the fake audio. Even when proven false, deepfakes can remain effective in getting their message across.

Many would argue that the responsibility now falls on governments to implement measures ensuring the integrity of elections. It's a cat-and-mouse game—and unfortunately, the cat is not exactly known for its swiftness. There are myriad ways to exploit technology for electoral manipulation, and stopping all of it may simply be impossible. Regulation is out of date (the Computer Misuse Act was passed in 1990, though it has been updated a few times) and the wheels of government turn slowly. Creating and passing new laws is a long process involving consultation, amendments, and more. But is it solely the responsibility of governments, or do others need to step up?

Is There a Solution?

Combating technology with technology is essential: there is simply too much misinformation out there for people to sift through. Some of the biggest tech companies are taking steps. Two weeks ago, a coalition of 20 tech firms including Microsoft, Meta, Google, Amazon, IBM, Adobe and chip designer Arm announced a collective pledge to tackle AI-generated disinformation during this year's elections, with a focus on combating deepfakes.

Is this reassuring? It's good to know that big tech firms have this problem on their radar, but tough to know how effective their efforts can be. Right now, they are only agreeing on technical standards and detection mechanisms—the work of actually detecting deepfakes is some way away. And while deepfakes are perhaps uniquely disturbing, they represent just a fraction of effective disinformation strategies. Sophistication is not always needed for fake news to spread: rumors can circulate on social media or apps like Telegram, real photos can be placed in new contexts to spread disinformation without clever editing, and even video game footage has been used to make claims about ongoing wars.

Fighting Misinformation During Elections

Fighting against misinformation is extremely difficult, but it is possible. And the coalition of 20 big tech firms has the right idea—collaboration is vital.

Be proactive

A lie can travel halfway around the world while the truth is putting on its shoes, said… someone (it’s a quote attributed to many different people). By the time we react to disinformation, it’s already out there and debunking efforts are not always effective. As Brandolini’s Law states, the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it. And often, when people read both the misinformation and the debunking, they only remember the lies. Warning people about what to look for in misinformation can help. Where did it originate? If it claims to be from an authoritative source, can you find the original? Is there a source at all?

Inoculate

Sander van der Linden, a professor of psychology and an expert on misinformation, recommends a similar approach to vaccinations—a weak dose of fake news to head off the incoming virus. By getting people to think about misinformation and evaluate it, and teaching people the tactics behind its creation, they can better deal with fake news stories they later encounter. Could we create a vaccine program for fake news? Perhaps, but it requires a big effort and a lot of collaboration between different groups.

Monitor

It's not only governments and public figures that are attacked by fake news; corporations and businesses can find themselves the target or unwitting bystanders. Telecom companies have been the subject of 5G conspiracy theories, and pharmaceutical companies have been accused of being part of, rather than helping to solve, the pandemic. But the problem can get weirder: a pizza restaurant in Washington DC and a furniture retailer have both had to react to being accused of child trafficking thanks to bizarre rumors circulating online. What are people saying about your business? Can you react before things get out of hand?

Misinformation works for a number of reasons—people want to know "the story behind the story", and it gives people a feeling of control when they have access to "facts" others do not—which is why misinformation spread so fast during a pandemic that took away that feeling of control from so many of us. Those spreading misinformation know how to tap into these fears. In cybersecurity terms, they know the vulnerabilities and how to exploit them. We can't distribute software patches to stop these attacks, but we can make them less effective by understanding them.

Disclaimer: The views and opinions expressed in this guest post are solely those of the author(s) and do not necessarily reflect the official policy or position of The Cyber Express. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.

Empowering Rapid Attack Path Analysis with Generative AI

By: Editorial
21 April 2024 at 05:45

Cybersecurity

By Nathan Wenzler, Chief Security Strategist, Tenable India is ranked third globally among nations facing the most severe cyber threats, as per the World Economic Forum. However, despite this alarming statistic, there exists a significant disparity between the escalating volume of threats and the resources allocated to combat them. The cybersecurity sector is grappling with a colossal skills deficit, with a shortage of 4 million professionals worldwide. Even seasoned cybersecurity experts find it daunting to navigate and decipher the increasingly intricate landscape of modern cyber threats across the ever-widening attack surface due to limited resources.

Role of Generative AI in Enhancing Cybersecurity Strategy

In response to this challenge, organizations are turning to generative AI to bridge the expertise gap and enhance their resilience against risk. A survey reveals that 44% of IT and cyber leaders express high levels of confidence in the capacity of generative AI to enhance their organization's cybersecurity strategy.

Security teams are increasingly consumed by the arduous task of scrutinizing various attack vectors in their systems and analyzing the tactics, techniques, and procedures employed by potential threat actors. Often, they find themselves reacting to cyberattacks post-incident rather than proactively thwarting them—a strategy far from ideal for robust cybersecurity. Organizations in India must shift towards a proactive stance, actively pursuing and understanding threats to establish a robust line of defense.

The expanding attack surface, coupled with the rapid adoption of cloud services, virtualization platforms, microservices, applications, and code libraries, has added immense complexity to the security landscape. Organizations must now contend with vulnerabilities, cloud misconfigurations, and risks associated with identity access, groups, and permissions. Conventional attack path analysis tools offer insights into threat actor entry points, which assets are key targets, and what threats may exist, but this can demand painstaking manual effort to decipher the implications step by step. While attackers require just one entry point to infiltrate a system and move laterally within it, defenders face the formidable task of analyzing the entire threat landscape at once, identifying all potential attack paths, and implementing security measures in the places that mitigate the most risk, especially when operating with limited staff.
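To make the idea of attack path analysis concrete, here is a minimal sketch (not any vendor's product, and the asset names and access relationships are entirely hypothetical): the environment is modeled as a directed graph where an edge means "an attacker on A can reach B", and all simple paths from an entry point to a critical asset are enumerated. This is the kind of exhaustive enumeration that defenders must reason about, and that generative AI tools aim to summarize and prioritize.

```python
# Toy attack path enumeration: illustrative only, with hypothetical assets.
from collections import deque

# Edge A -> B means "an attacker who controls A can reach B".
edges = {
    "internet": ["web-app"],
    "web-app": ["app-server"],           # e.g. via an unpatched vulnerability
    "app-server": ["db", "file-share"],  # overly broad service account
    "file-share": ["domain-admin"],      # credentials left in a script
    "db": [],
    "domain-admin": [],
}

def attack_paths(graph, entry, target):
    """Enumerate all simple (cycle-free) paths from an entry point to a target asset."""
    paths, queue = [], deque([[entry]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:  # skip nodes already on this path to avoid cycles
                queue.append(path + [nxt])
    return paths

for p in attack_paths(edges, "internet", "domain-admin"):
    print(" -> ".join(p))
# prints: internet -> web-app -> app-server -> file-share -> domain-admin
```

Even in this five-node toy, breaking any single edge (say, removing the stored credentials on the file share) severs the path to the critical asset; at enterprise scale the graph has thousands of nodes, which is why distilling such results into prioritized, readable guidance is where generative AI is being applied.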

Empowering Security Teams with Generative AI

Generative AI emerges as a potent solution to these challenges, empowering security teams by providing them with the perspective of attackers to map out potential threats and prioritize mitigation strategies based on criticality. By consolidating data from disparate sources, generative AI offers an easier way to understand the complexity of the attack surface, enabling organizations to more quickly assess exposures, prioritize actions, and visualize relationships across the entire attack surface. This means security teams can make risk decisions more quickly, leaving less time for an attacker to take advantage of an exposed asset and begin their assault on the organization.

Generative AI-powered attack path analysis amalgamates and distills insights from vulnerability management, cloud security, web application, and identity exposures, enabling organizations to comprehend their risk from the perspective of an attacker. This facilitates informed and targeted cyber defense strategies, allowing organizations to anticipate threats and fortify their defenses accordingly.

Through succinct summaries and mitigation guidelines, generative AI equips security teams with a quicker and more efficient view of actionable insights, sparing them the tedious task of manually researching what the threats are and what the correct security controls should be, whether that's identifying specific patches or version numbers or understanding how to correct unauthorized user access. Even team members with varying levels of expertise can draw actionable conclusions from generative AI, simplifying complex cyberattack paths and enabling effective threat mitigation.

In summary, generative AI supports a more comprehensive and proactive approach to cybersecurity, empowering organizations to understand and address potential threats quickly.
By breaking free from the constraints of siloed security data, organizations can develop strategies to predict, prevent, and mitigate cyber risks more effectively and quickly than ever before.