Today — 18 June 2024 · Cybersecurity

Designing a More Inclusive Web: DataDome’s Response Page Accessibility Upgrades

18 June 2024 at 13:31

DataDome's commitment to accessibility extends to every facet of our business. Learn how we've updated our response pages to meet the WCAG 2.2 AA standards.

The post Designing a More Inclusive Web: DataDome’s Response Page Accessibility Upgrades appeared first on Security Boulevard.

Explained: Android overlays and how they are used to trick people

18 June 2024 at 12:51

Sometimes you’ll see the term “overlays” used in articles about malware and you might wonder what they are. In this post we will try to explain what overlays—particularly on Android devices—are, and how cybercriminals deploy them.

Most of the time, overlays are used to make people think they are visiting a legitimate website or using a trusted app while in reality they are not.

Simply put, the Android overlay is a feature used by an app to appear on top of another app. The legitimate use of overlays is to offer functionality to the app’s user without them having to leave the app itself, for example for messages or alerts, such as Android bubbles on Messenger.

The possible malicious use of overlays, then, is not hard to guess. Overlays can be used to draw a full window on top of a legitimate app and, as such, intercept all the interactions the user has with the app. But they can also be superimposed over certain critical areas of an app like the text in a message box.

Some examples of malicious uses of overlays:

  • Requesting permissions under false pretenses: malicious apps can hide their requests by covering the legitimate app’s permissions text.
  • Clickjacking, where a user is tricked into clicking on actionable content thinking they are interacting with a legitimate app.
  • Intercepting information like login credentials and even some multi-factor authentication (MFA) tokens, by making the user think they are entering them on a legitimate app or website.

Whether the overlays are transparent or whether they mimic the legitimate app does not influence the way they work. As long as they blend with the original application’s interface, they are incredibly hard to spot.

Most of the time, a malicious overlay’s goal is to intercept certain user data which enables cybercriminals to steal money or cryptocurrencies. This is why many banking apps have protection in place. In modern Android versions, developers can successfully block any non-system Android overlay to protect against overlay attacks.

Protection against overlays

As we said, screen overlay attacks are most common on Android devices, and they are a significant threat, so we will explain how you can check which apps have the permission to use overlays and how you can disable it.

Tap Settings > Apps > Options (three stacked dots) > Special access > Appear on top. Here you can see a list of apps with the permission to “Appear on top” and you can disable the ones you don’t recognize or don’t need to have this permission.

Using an anti-malware solution for your Android device will be effective against known malicious apps. You can uninstall these apps using the mobile device’s uninstall functionality, but the tricky part lies in identifying the offending behavior and app. That is where Malwarebytes for Android can help—by identifying these apps and removing them.

It also helps to use authentication methods that are harder to phish. MFA is vital to enable, and it will protect you from many types of attacks, so please continue to use it. However, authentication-in-the-middle attacks work against only certain types of MFA; passkeys, for example, won’t allow cybercriminals to log in to your account this way.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Threat Actors Use Obscure or Self-Made Link Shortener Services for Credential Harvesting

18 June 2024 at 10:11

An illustration of a door with a shortened link on it leading to a red lit room.

Threat Actors Use Obscure or Self-Made Link Shortener Services for Credential Harvesting Earlier this month our expert takedown team responded to a bad actor that used link shortener services to obfuscate a link to a phishing page that impersonated one of our financial institution customers. The destination was a sign-in webpage presenting malicious content including […]

The post Threat Actors Use Obscure or Self-Made Link Shortener Services for Credential Harvesting first appeared on alluresecurity.

The post Threat Actors Use Obscure or Self-Made Link Shortener Services for Credential Harvesting appeared first on Security Boulevard.

The TIDE: UNC5537, SCARLETEEL, new Threat Object Stubs, and now 303 defensive solution mappings (our biggest release yet!)

18 June 2024 at 09:57

In the latest edition of The TIDE: Threat-Informed Defense Education, we’re announcing new threat intelligence highlights, new direction for our Community Edition users, as well as the biggest release we’ve had yet of defensive technologies. It’s an exciting time at Tidal.

First up, I’m excited to introduce Threat Object Stubs. In the past, if a user searched in Tidal Cyber Community Edition for an Enterprise Edition exclusive threat, they would have been left with the dreaded “no results.” Starting today, instead of an empty result they will see the threat object, its relationships to other objects, and references.

The post The TIDE: UNC5537, SCARLETEEL, new Threat Object Stubs, and now 303 defensive solution mappings (our biggest release yet!) appeared first on Security Boulevard.

Feeding the Phishes

18 June 2024 at 09:21

PHISHING SCHOOL

Bypassing Phishing Link Filters

You could have a solid pretext that slips right by your target secure email gateway (SEG); however, if your link looks too sketchy (or, you know, “smells phishy”), your phish could go belly-up before it even gets a bite. That’s why I tend to think of link filters as their own separate control. Let’s talk briefly about how these link filters work and then explore some ways we might be able to bypass them.

What the Filter? (WTF)

Over the past few years, I’ve noticed a growing interest in detecting phishing based on the links themselves–or, at least, there are several very popular SEGs that place a very high weight on the presence of a link in an email. I’ve seen so much of this that I have made this type of detection one of my first troubleshooting steps when a SEG blocks me. I’ll simply remove all links from an email and check if the message content gets through.

In at least one case, I encountered a SEG that blocked ANY email that contained a link to ANY unrecognized domain, no matter what the wording or subject line said. In this case, I believe my client was curating a list of allowed domains and instructed the SEG to block everything else. It’s an extreme measure, but I think it is a very valid concern. Emails with links are inherently riskier than emails that do not contain links; therefore, most modern SEGs will increase the SPAM score of any message that contains a link and often will apply additional scrutiny to the links themselves.

How Link Filters Work — Finding the Links

If a SEG filters links in an email, it will first need to detect/parse each link in the content. To do this, almost any experienced software engineer will reach straight for regular expressions (“regex” for short):

Stand back! I know regular expressions

To which, any other experienced software engineer will be quick to remind us that while regex is extremely powerful, it is also easy to screw up:

99 problems (and regex is one)

As an example, here are just a few of the top regex filters I found for parsing links on stackoverflow:

(http|ftp|https):\/\/([\w_-]+(?:(?:\.[\w_-]+)+))([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])

(?:(?:https?|ftp|file):\/\/|www\.|ftp\.)(?:\([-A-Z0-9+&@#\/%=~_|$?!:,.]*\)|[-A-Z0-9+&@#\/%=~_|$?!:,.])*(?:\([-A-Z0-9+&@#\/%=~_|$?!:,.]*\)|[A-Z0-9+&@#\/%=~_|$])

(?:(?:https?|ftp):\/\/)?[\w/\-?=%.]+\.[\w/\-&?=%.]+

([\w+]+\:\/\/)?([\w\d-]+\.)*[\w-]+[\.\:]\w+([\/\?\=\&\#\.]?[\w-]+)*\/?

(?i)\b((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:’”.,<>?«»“”‘’]))

Don’t worry if you don’t know what any of these mean. I consider myself to be well versed in regex and even I have no opinion on which of these options would be better than the others. However, there are a couple things I would like to note from these examples:

  1. There is no “right” answer; URLs can be very complex
  2. Most (but not all) are looking for strings that start with “http” or something similar

These are also some of the most popular (think “smart people”) answers to this problem of parsing links. I could also imagine that some software engineers would take a more naive approach of searching for all anchor (“<a>”) HTML tags or looking for “href=” to indicate the start of a link. No matter which solution the software engineer chooses, there are likely going to be at least some valid URLs that their parser doesn’t catch and might leave room for a categorical bypass. We might also be able to evade parsers if we can avoid the common indicators like “http” or break up our link into multiple sections.
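To make the parser’s blind spots concrete, here is a minimal Python sketch. The regex, the sample email text, and the domains are all illustrative stand-ins of my own, not any particular SEG’s actual filter; the point is simply that a scheme-anchored pattern misses a URL that omits “http”:

import re

# A hypothetical, naive link parser of the kind described above: it only
# recognizes URLs that start with an explicit scheme like http:// or ftp://.
NAIVE_URL_RE = re.compile(r'(?:https?|ftp)://[^\s"<>]+', re.IGNORECASE)

email_body = """
Please review your account at https://mail.example-corp.com/login
or, if that link does not work, copy accounts.goooogle.com/login?id=34567
into your browser.
"""

print(NAIVE_URL_RE.findall(email_body))
# ['https://mail.example-corp.com/login']
# The scheme-less link slips straight past this parser, even though a human
# would still treat it as a URL.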

Side Note: Did you see that some of these popular URL parsers account for FTP and some don’t? Did you know that most browsers can connect to FTP shares? Have you ever tried to deliver a phishing payload over an anonymous FTP link?

How Link Filters Work — Filtering the Links

Once a SEG has parsed out all the links in an email, how should it determine which ones look legitimate and which ones don’t? Most SEGs these days look at two major factors for each link:

  1. The reputation of the domain
  2. How the link “looks”

Checking the domain reputation is pretty straightforward; you just split the link to see what’s between the first two forward slashes (“//”) and the next forward slash (“/”) and look up the resulting domain or subdomain on Virustotal or similar. Many SEGs will share intelligence on known bad domains with other security products and vice versa. If your domain has been flagged as malicious, the SEG will either block the email or remove the link.
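As a rough illustration (the helper function and the example link below are hypothetical, not any vendor’s code), the split described above is essentially what a standard URL parser’s authority component gives you:

from urllib.parse import urlparse

def extract_domain(link: str) -> str:
    # Everything between the first "//" and the next "/" is the authority
    # (host, and possibly a port); urlparse does this split for us.
    return urlparse(link).netloc.lower()

link = "https://accounts.phish.example.com/login?id=34567"
print(extract_domain(link))  # accounts.phish.example.com
# A real SEG would now look this value up against reputation feeds
# (e.g., VirusTotal) before deciding whether to block or rewrite the link.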

As far as checking how the link “looks”, most SEGs these days use artificial intelligence or machine learning (i.e., AI/ML) to categorize links as malicious or benign. These AI models have been trained on a large volume of known-bad links and can detect themes and patterns commonly used by SPAM authors. As phishers, I think it’s important for us to focus on the “known-bad” part of that statement.

I’ve seen a talk in which a researcher claimed their AI model was able to detect over 98% of malicious links from their training data. At first glance, this seems like a very impressive number; however, we need to keep in mind that in order to have a training set of malicious links in the first place, humans had to detect 100% of the training set as malicious. Therefore, the AI model was only 98% as good as a human at detecting phishing links based solely on the “look” of the link. I would imagine that it would do much worse on a set of unknown-bad links, if there were hypothetically a way to attain such a set. To slip through the cracks, we should aim to put our links in that unknown-bad category.

Even though we are up against AI models, I like to remind myself that these models can only be trained on human-curated data and therefore can only theoretically approach the competence of a human, but not surpass humans at this task. If we can make our links look convincing enough for a human, the AI should not give us any trouble.

Bypassing Link Filters

Given what we now know about how link filters work, we should have two main tactics available to us for bypassing the filter:

  1. Format our link so that it slips through the link parsing phase
  2. Make our link “look” more legitimate

If the parser doesn’t register our link as a link, then it can’t apply any additional scrutiny to the location. If we can make our link location look like some legitimate link, then even if we can’t bypass the parser, we might get the green light anyway. Please note that these approaches are not mutually exclusive and you might have greater success mixing techniques.

Bypassing the Parser

Don’t use an anchor tag

One of the most basic parser bypasses I have found for some SEGs is to simply leave the link URL in plaintext by removing the hyperlink in Outlook. Normally, link URLs are placed in the “hypertext reference” (href) attribute of an HTML anchor tag (<a>). As I mentioned earlier, one naive but surprisingly common solution for parsing links is to use an HTML parsing library like BeautifulSoup in Python. For example:

from bs4 import BeautifulSoup

# Parse the raw email body and collect every anchor tag's href.
soup = BeautifulSoup(email.content, 'html.parser')
links = soup.find_all("a")  # Find all elements with the tag <a>
for link in links:
    print("Link:", link.get("href"), "Text:", link.string)

Any SEG that uses this approach to parse links won’t see a URL outside of an anchor tag. While a URL that is not a clickable link might look a little odd to the end user, it’s generally worth the tradeoff when this bypass works. In many cases, mail clients will parse and display URLs as hyperlinks even if they are not in an anchor tag; therefore, there is usually little to no downside of using this technique.
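Continuing the sketch above, this time with a made-up email body (the domains here are placeholders), you can see the blind spot directly:

from bs4 import BeautifulSoup

html_body = """
<p>Click <a href="https://tracked.example.com/a">here</a> to unsubscribe.</p>
<p>Or paste this into your browser: https://phish.example.com/login</p>
"""

soup = BeautifulSoup(html_body, "html.parser")
print([a.get("href") for a in soup.find_all("a")])
# ['https://tracked.example.com/a']
# The plaintext URL in the second paragraph never shows up, because it is not
# wrapped in an anchor tag, yet most mail clients will still render it as a
# clickable link for the recipient.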

Use a Base Tag (a.k.a BaseStriker Attack)

One interesting method of bypassing some link filters is to use a little-known HTML tag called “base”. This tag allows you to set the base domain for any links that use relative references (i.e., links with hrefs that start with “/something” instead of direct references like “https://example.com/something”). In this case, the “https://example.com” would be considered the “base” of the URL. By defining the base using the HTML base tag in the header of the HTML content, you can then use just relative references in the body of the message. While HTML headers frequently contain URLs for things like CSS or XML schemas, the header is usually not expected to contain anything malicious and may be overlooked by a link parser. This technique is known as the “BaseStriker” attack and has been known to work against some very popular SEGs:

https://www.cyberdefensemagazine.com/basestriker-attack-technique-allow-to-bypass-microsoft-office-365-anti-phishing-filter/

This technique works because you essentially break your link into two pieces: the domain sits in the HTML header, and the rest of the URL sits in the anchor tags in the body. Because the anchor tags’ hrefs don’t start with “https://”, they aren’t detected as links.
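Here is a minimal sketch of that layout. The domain and path are placeholders, and the anchor-only check is a simplified stand-in for a real filter, not any specific product’s logic:

from bs4 import BeautifulSoup

html_body = """
<html>
  <head><base href="https://phish.example.com"></head>
  <body>
    <a href="/login?campaign=q2-review">Review the document</a>
  </body>
</html>
"""

# A simplified stand-in for a filter that only inspects anchor tags and only
# flags hrefs carrying an absolute http(s) URL.
flagged = [
    a["href"]
    for a in BeautifulSoup(html_body, "html.parser").find_all("a")
    if a.get("href", "").lower().startswith(("http://", "https://"))
]
print(flagged)
# [] -- the only href is relative; the scheme and domain live in the <base>
# tag up in the head, which this body-focused check never considers, yet the
# mail client will happily resolve the link against it.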

Scheming Little Bypasses

The first part of a URL, before the colon and forward slashes, is what’s known as the “scheme”:

URI = scheme ":" ["//" authority] path ["?" query] ["#" fragment]

As mentioned earlier, one of the more robust ways to detect URLs is to look for anything that looks like a scheme (e.g., “http://” or “https://”), followed by a sequence of characters that would be allowed in a URL. If we simply leave off the scheme, many link parsers will not be able to detect our URL, but it will still look like a URL to a human:

accounts.goooogle.com/login?id=34567

A human might easily be convinced to simply copy and paste this link into their browser for us. In addition, there are quite a few legitimate schemes that could open a program on our target user’s system and potentially slip through a URL parser that is only looking for web links:

https://en.wikipedia.org/wiki/List_of_URI_schemes

There are at least a few that could be very useful as phishing links ;)

QR Phishing

What if there isn’t a link in the email at all? What if it’s an image instead? You can use a tool like SquarePhish to automate phishing with QR codes instead of traditional links:

GitHub - secureworks/squarephish

I haven’t played with this yet, but have heard good things from friends that have used similar techniques. If you want to play with automating this attack yourself, NodeJS has a simple library for generating QRs:

qrcode
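If you would rather stay in Python, the third-party qrcode package offers a similar one-liner; the landing-page URL below is just a placeholder:

import qrcode  # pip install "qrcode[pil]"

# Encode a (placeholder) landing-page URL into a QR image that can be
# embedded in the email body instead of a clickable link.
img = qrcode.make("https://landing.example.com/device-enrollment")
img.save("enroll_qr.png")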

Bypassing the Filter

Don’t Mask

(Hold on. I need to get on my soapbox…) I can’t count how many times I’ve been blocked because of a masked link only to find that unmasking the link would get the same pretext through. I think this is because spammers have thoroughly abused this feature of anchor tags in the past and average email users seldom use masked links. Link filters tend to see masked links as far more dangerous than regular links; therefore, just use a regular link. It seems like everyone these days knows how to hover a link and check its real location anyway, so masked links are even bad at tricking humans now. Don’t be cliche. Don’t use masked links.

Use Categorized Domains

Many link filters block or remove links to domains that are either uncategorized, categorized as malicious, or were recently registered. Therefore, it’s generally a good idea to use domains that have been parked long enough to be categorized. We’ve already touched on this in “One Phish Two Phish, Red Teams Spew Phish”, so I’ll skip the process of getting a good domain; however, just know that the same rules apply here.

Use “Legitimate” Domains

If you don’t want to go through all the trouble of maintaining categorized domains for phishing links, there are some generally trustworthy domains you can leverage instead. One example I recently saw “in-the-wild” was a spammer using a sites.google.com link. They just hosted their phishing page on Google! I thought this was brilliant because I would expect most link filters to allow Google, and even most end users would think anything on google.com must be legit. Some other similar examples would be hosting your phishing sites as static pages on GitHub, in an S3 bucket, other common content delivery networks (CDNs), or on SharePoint, etc. There are tons of seemingly “legitimate” sites that allow users to host pages of arbitrary HTML content.

Arbitrary Redirects

Along the same lines as hosting your phishing site on a trusted domain is using trusted domains to redirect to your phishing site. One classic example of this would be link shorteners like TinyURL. While TinyURL has been abused for SPAM to the point that I would expect most SEGs to block TinyURL links, it does demonstrate the usefulness of arbitrary redirects.

A more useful form of arbitrary redirect for bypassing link filters is a URL with either a cross-site scripting (XSS) vulnerability that lets us specify a ‘window.location’ change, or an HTTP GET parameter that specifies where the page should redirect to. As part of my reconnaissance phase, I like to spend at least a few minutes on my target’s main website looking for these types of vulnerabilities. They are surprisingly common, and while an arbitrary redirect might be considered a low-risk finding on a web application penetration test report, it can be extremely useful when combined with phishing. Your links will point to a URL on your target organization’s main website, so it is extremely unlikely that a link filter or even a human will see the danger. In some cases, you may find that your target organization has configured an explicit allow list in the SEG for links that point to their own domains.
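As a quick sketch of what such a link ends up looking like (the redirect endpoint, the parameter name, and both domains are entirely hypothetical):

from urllib.parse import urlencode

# Hypothetical open redirect on the target organization's own site.
phish_page = "https://portal.example-phish.com/login"
link = "https://www.target-org.example/sso/logout?" + urlencode({"returnUrl": phish_page})
print(link)
# https://www.target-org.example/sso/logout?returnUrl=https%3A%2F%2Fportal.example-phish.com%2Flogin
# The visible domain is the target's own, so both link filters and users
# tend to give it the benefit of the doubt.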

Link to an Attachment

Did you know that links in an email can also point to an email attachment? Instead of providing a URL in the href of your anchor tag, you can specify the content identifier (CID) of the attachment (e.g. href=“cid:mycontentidentifier@content.net”). One way I have used this trick to bypass link filters is to link to an HTML attachment and use obfuscated JavaScript to redirect the user to the phishing site. Because our href does not look like a URL, most SEGs will think our link is benign. You could also link to a PDF, DOCX, or several other usually allowed file types that then contain the real phishing link. This might require a little more setup in your pretext to instruct the user, or just hope that they will click the link after opening the document. In this case, I think it makes the most sense to add any additional instructions inside the document where the contents are less likely to be scrutinized by the SEG’s content filter.
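Here is a rough sketch of that structure using Python’s standard email library. The addresses, subject, and filenames are placeholders, and the attached HTML is left as a harmless stub rather than an actual redirect:

from email.message import EmailMessage
from email.utils import make_msgid

# The anchor's href is a Content-ID reference to an attached HTML part, so
# the body contains no http(s) URL for a link filter to parse.
attachment_cid = make_msgid()

msg = EmailMessage()
msg["Subject"] = "Updated expense policy"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg.set_content("The updated policy is attached.")  # plain-text fallback
msg.add_alternative(
    f'<p>The updated policy is attached: '
    f'<a href="cid:{attachment_cid[1:-1]}">open the policy</a></p>',
    subtype="html",
)
# Attach the HTML file as a related part addressed by the Content-ID used in
# the href above; in the scenario described, this file would hold the
# obfuscated JavaScript redirect.
msg.get_payload()[1].add_related(
    "<html><body><!-- stub: real content would go here --></body></html>",
    subtype="html",
    cid=attachment_cid,
)

with open("pretext.eml", "wb") as fh:
    fh.write(msg.as_bytes())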

Pick Up The Phone

This blog rounds out our “message inbound” controls that we have to bypass for social engineering pretexts. It would not be complete without mentioning one of the simplest bypasses of them all:

Not using email!

If you pick up the phone and talk directly to your target, your pretext travels from your mouth, through the phone, then directly into their ear, and hits their brain without ever passing through a content or reputation filter.

Along the same lines, Zoom calls, Teams chats, LinkedIn messaging, and just about any other common business communication channel will likely be subject to far fewer controls than email. I’ve trained quite a few red teamers who prefer phone calls over emails because it greatly simplifies their workflow. Just a few awkward calls are usually all it takes to get a target to cede access to their environment.

More interactive forms of communication, like phone calls, also allow you to gauge how the target is feeling about your pretext in real time. It’s usually obvious within seconds whether someone believes you and wants to help or if they think you’re full of it and it’s time to cut your losses, hang up the phone, and try someone else. You can also use phone calls as a way to prime a target for a follow-up email to add perceived legitimacy. Getting our message to the user is half the battle, and social engineering phone calls can be a powerful shortcut.

In Summary

If you need to bypass a link filter, either:

  1. Make your link look like it’s not a link
  2. Make your link look like a “legitimate” link

People still use links in emails all the time. You just need to blend in with the “real” ones and you can trick the filter. If you are really in a pinch, just call your targets instead. It feels more personal, but it gets the job done quickly.


Feeding the Phishes was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Feeding the Phishes appeared first on Security Boulevard.

Raising Our Glasses to Cequence: We’ve Built One of The Best Workplaces in The Nation!

18 June 2024 at 09:00

At Cequence Security, our journey has always been driven by a deep commitment to our team. We believe that a company’s culture isn’t just about words on a website or slogans on a wall—it’s about how our people feel, especially when the weekend draws to a close. It’s about trust, curiosity, drive, humor, and heart. […]

The post Raising Our Glasses to Cequence: We’ve Built One of The Best Workplaces in The Nation! appeared first on Cequence Security.

The post Raising Our Glasses to Cequence: We’ve Built One of The Best Workplaces in The Nation! appeared first on Security Boulevard.

43% of couples experience pressure to share logins and locations, Malwarebytes finds

18 June 2024 at 09:00

All isn’t fair in love and romance today, as 43% of people in a committed relationship said they have felt pressured by their own partners to share logins, passcodes, and/or locations. A worrying 7% admitted that this type of pressure has included the threat of breaking up or the threat of physical or emotional harm.

These are the latest findings from original research conducted by Malwarebytes to explore how romantic couples navigate shared digital access to one another’s devices, accounts, and location information.

In short, digital sharing is the norm in modern relationships, but it doesn’t come without its fears.

While everybody shares some type of device, account, or location access with their significant other (100% of respondents), and plenty grant their significant other access to at least one personal account (85%), a sizeable portion longs for something different—31% said they worry about “how easy it is for my partner to track what I’m doing and where I am at all times because of how much we share,” and 40% worry that “telling my partner I don’t want to share logins, PINs, and/or locations would upset them.”

By surveying 500 people in committed relationships in the United States, Malwarebytes has captured a unique portrait of what it means to date, marry, and be in love in 2024—a part of life that is now inseparable from smart devices, apps, and the internet at large.

The complete findings can be found in the latest report, “What’s mine is yours: How couples share an all-access pass to their digital lives.” You can read the full report below.

Here are some of the key findings:

  • Partners share their personal login information for an average of 12 different types of accounts.
  • 48% of partners share the login information of their personal email accounts.
  • 30% of partners regret sharing location tracking.
  • 18% of partners regret sharing account access. The number is significantly higher for men (30%).
  • 29% of partners said an ex-partner used their accounts to track their location, impersonate them, access their financial accounts, and other harms.
  • Around one in three Gen Z and Millennial partners report an ex has used their accounts to stalk them.

But the data doesn’t only point to causes for concern. It also highlights an opportunity for learning. As Malwarebytes reveals in this latest research, people are looking for guidance, with seven in 10 people admitting they want help navigating digital co-habitation.

According to one Gen Z survey respondent:

“I feel like it might take some effort (to digitally disentangle) because we are more seriously involved. We have many other kinds of digital ties that we would have to undo in order to break free from one another.”

That is why, today, Malwarebytes is also launching its online resource hub: Modern Love in the Digital Age. At this new guidance portal, readers can learn about whether they should share their locations with their partners, why car location tracking presents a new problem for some couples, and how they can protect themselves from online harassment. Access the hub below.

CISA Releases One Industrial Control Systems Advisory

By: CISA
18 June 2024 at 08:00

CISA released one Industrial Control Systems (ICS) advisory on June 18, 2024. These advisories provide timely information about current security issues, vulnerabilities, and exploits surrounding ICS.

CISA encourages users and administrators to review the newly released ICS advisories for technical details and mitigations.

CISA and Partners Release Guidance for Modern Approaches to Network Access Security

By: CISA
18 June 2024 at 08:00

Today, CISA, in partnership with the Federal Bureau of Investigation (FBI), released guidance, Modern Approaches to Network Access Security, along with the following organizations: 

  • New Zealand’s Government Communications Security Bureau (GCSB); 
  • New Zealand’s Computer Emergency Response Team (CERT-NZ); and 
  • The Canadian Centre for Cyber Security (CCCS).

The guidance urges business owners of all sizes to move toward more robust security solutions—such as Zero Trust, Secure Service Edge (SSE), and Secure Access Service Edge (SASE)—that provide greater visibility of network activity. Additionally, this guidance helps organizations to better understand the vulnerabilities, threats, and practices associated with traditional remote access and VPN deployment, as well as the inherent business risk posed to an organization’s network by remote access misconfiguration.

CISA and its partners encourage leaders to review the guidance to help with the prioritization and protection of remote computing environments.

For more information and guidance on protection against the most common and impactful tactics, techniques, and procedures for network access security, visit CISA’s Cross-Sector Cybersecurity Performance Goals. For more information on zero trust, visit CISA’s Zero Trust Maturity Model.

RAD Data Communications SecFlow-2

By: CISA
18 June 2024 at 08:00

View CSAF

1. EXECUTIVE SUMMARY

  • CVSS v4 8.7
  • ATTENTION: Exploitable remotely/low attack complexity/public exploits are available
  • Vendor: RAD Data Communications
  • Equipment: SecFlow-2
  • Vulnerability: Path Traversal

2. RISK EVALUATION

Successful exploitation of this vulnerability could allow an attacker to obtain files from the operating system by crafting a special request.

3. TECHNICAL DETAILS

3.1 AFFECTED PRODUCTS

The following RAD Data Communications products are affected:

  • SecFlow-2: All versions

3.2 Vulnerability Overview

3.2.1 PATH TRAVERSAL: '..\FILENAME' CWE-29

RAD SecFlow-2 devices with Hardware 0202, Firmware 4.1.01.63, and U-Boot 2010.12 allow URIs beginning with /.. for Directory Traversal, as demonstrated by reading /etc/shadow.

CVE-2019-6268 has been assigned to this vulnerability. A CVSS v3.1 base score of 7.5 has been calculated; the CVSS vector string is (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N).

A CVSS v4 score has also been calculated for CVE-2019-6268. A base score of 8.7 has been calculated; the CVSS vector string is (CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:N/VA:N/SC:N/SI:N/SA:N).

3.3 BACKGROUND

  • CRITICAL INFRASTRUCTURE SECTORS: Communications
  • COUNTRIES/AREAS DEPLOYED: Worldwide
  • COMPANY HEADQUARTERS LOCATION: Israel

3.4 RESEARCHER

CISA discovered a PoC (Proof of Concept) and reported it to RAD Data Communications.

4. MITIGATIONS

RAD Data Communications reports that SecFlow-2 is EOL (End-Of-Life) and recommends upgrading to the more secure RAD SecFlow-1p product line.

CISA recommends users take defensive measures to minimize the risk of exploitation of this vulnerability, such as:

  • Minimize network exposure for all control system devices and/or systems, ensuring they are not accessible from the internet.
  • Locate control system networks and remote devices behind firewalls and isolate them from business networks.
  • When remote access is required, use more secure methods, such as Virtual Private Networks (VPNs), recognizing VPNs may have vulnerabilities and should be updated to the most current version available. Also recognize VPN is only as secure as the connected devices.

CISA reminds organizations to perform proper impact analysis and risk assessment prior to deploying defensive measures.

CISA also provides a section for control systems security recommended practices on the ICS webpage on cisa.gov/ics. Several CISA products detailing cyber defense best practices are available for reading and download, including Improving Industrial Control Systems Cybersecurity with Defense-in-Depth Strategies.

CISA encourages organizations to implement recommended cybersecurity strategies for proactive defense of ICS assets.

Additional mitigation guidance and recommended practices are publicly available on the ICS webpage at cisa.gov/ics in the technical information paper, ICS-TIP-12-146-01B--Targeted Cyber Intrusion Detection and Mitigation Strategies.

Organizations observing suspected malicious activity should follow established internal procedures and report findings to CISA for tracking and correlation against other incidents.

No known public exploitation specifically targeting this vulnerability has been reported to CISA at this time.

5. UPDATE HISTORY

  • June 18, 2024: Initial Publication

Duo Charged with Operating $430 Million Dark Web Marketplace

Two suspected administrators of a $430 million dark web marketplace are facing the possibility of life sentences in the United States. The U.S. Department of Justice (DOJ) has charged Thomas Pavey, 38, and Raheim Hamilton, 28, with managing "Empire Market" from 2018 to 2020, and for previously selling counterfeit U.S. currency on AlphaBay, a now-defunct criminal market.

The Justice Department alleges that Pavey and Hamilton facilitated nearly four million transactions on Empire Market, which involved drugs such as heroin, methamphetamine and cocaine, as well as counterfeit currency and stolen credit card information. Pavey is from Ormond Beach, Florida, and Hamilton is from Suffolk, Virginia.

The indictment claims that they initially collaborated on selling counterfeit U.S. currency on AlphaBay. After AlphaBay was shut down in a global law enforcement operation in July 2017, Hamilton and Pavey launched Empire Market on February 1, 2018.

Operation of Empire Market

Empire Market featured categories such as Fraud, Drugs & Chemicals, Counterfeit Items, and Software & Malware. The indictment mentions at least one instance where counterfeit U.S. currency was sold to an undercover law enforcement agent on the platform. Transactions were conducted using cryptocurrency and the platform allowed users to even rate the sellers. Hamilton and Pavey allegedly managed Empire Market until August 22, 2020. During the investigation, the DOJ seized $75 million worth of cryptocurrency, along with cash and precious metals, though it remains unclear if these were obtained through raids on the suspects' properties.

New Dark Web Marketplaces Spring Up

This case is part of a broader trend where former users of one dark web marketplace create new platforms following law enforcement crackdowns. For example, after AlphaBay's closure, some vendors moved to create new marketplaces or tools like Skynet Market.

Another notable cybercriminal forum, BreachForums, has encountered issues recently while attempting to resume operations after law enforcement actions. ShinyHunters, who had reportedly retired after tiring of the pressure of running a notorious hacker forum, returned on June 14 to announce that the forum is now under the ownership of a threat actor operating under the new handle "Anastasia." It's not yet clear whether the move will quell concerns that the forum has been taken over by law enforcement after a May 15 FBI-led takeover, but for now, BreachForums is up and running under its .st domain.

The arrests of Pavey and Hamilton underscore the ongoing efforts by law enforcement to dismantle dark web marketplaces that facilitate illegal activities, and highlight the significant legal consequences for those involved in such operations. Pavey and Hamilton are currently in custody, awaiting arraignment in a federal court in Chicago. They face numerous charges, including drug trafficking, computer fraud, counterfeiting and money laundering. Each charge carries a potential life sentence in federal prison.

NoName Carries Out Romania Cyberattack, Downs Portals of Government, Stock Exchange

Several pro-Russia hacker groups have allegedly carried out a massive Distributed Denial-of-Service (DDoS) attack in Romania on June 18, 2024. The attack has affected critical websites, including the official site of Romania and the portals of the country's stock exchange and financial institutions. The attack was allegedly conducted by NoName in collaboration with the Russian Cyber Army, HackNet, CyberDragon, and Terminus. The extent of the damage, however, remains unclear.

Details About Romania Cyberattack

According to NoName, the cyberattack was carried out on Romania for its pro-Ukraine stance in the Russia-Ukraine war. In its post on X, NoName claimed, “Together with colleagues shipped another batch of DDoS missiles to Romanian government websites.” The threat actor claimed to have attacked the following websites:
  • The Government of Romania: This is not the first time that the country’s official site was hacked. In 2022, Pro-Russia hacker group Killnet claimed to have carried out cyberattacks on websites of the government and Defense Ministry. However, at that time, the Romania Government claimed that there was no compromise of data due to the attack and the websites were soon restored.
  • National Bank of Romania: The National Bank of Romania is the central bank of Romania and was established in April 1880. Its headquarters are in the capital city of Bucharest.
  • Aedificium Bank for Housing: A banking firm that provides residential lending, home loans, savings, and financing services. It was founded in 2004 and has branches in the European Union (EU), and Europe, Middle East, and Africa (EMEA).
  • Bucharest Stock Exchange: The Bucharest Stock Exchange is the stock exchange of Romania, located in Bucharest. As of 2023, there were 85 companies listed on the BVB.

Despite the bold claims made by the NoName group, the extent of the Romania cyberattack, the details of any compromised data, and the motive behind the attack remain undisclosed. A visual examination of the affected organizations’ websites shows that all of the listed sites are experiencing accessibility issues, ranging from “403 Forbidden” errors to prolonged loading times, indicating a probable disruption or compromise.

The situation is dynamic and continues to unravel. It is imperative to approach this information cautiously, as unverified claims are not uncommon in the cybersecurity world. The alleged NoName attack highlights the persistent threat of cyberattacks on critical entities such as government organizations and financial institutions. However, official statements from the targeted organizations have yet to be released, leaving room for skepticism regarding the severity and authenticity of the Romania cyberattack claim. Until the affected organizations provide official communication, the true nature and impact of the alleged attack remain uncertain.

Romania Cyberattacks Are Not Uncommon

This isn’t the first instance of NoName targeting organizations in Romania. In March this year, NoName attacked the Ministry of Internal Affairs, The Service of Special Communications, and the Central Government. In February, over a hundred Romanian healthcare facilities were affected by a ransomware attack by an unknown hacker, with some doctors forced to resort to pen and paper.

How to Mitigate NoName DDoS Attacks

Mitigating NoName’s DDoS attacks requires always-on cloud protection and specialized traffic-filtering tools that can detect and absorb malicious traffic before it reaches the servers. In some cases, certain antivirus software can detect the malicious tools that threat actors use to launch DDoS attacks. Robust, essential cyber hygiene practices also help, including patching vulnerabilities and not opening phishing emails that are specially crafted to look like urgent communications from legitimate government organizations and other spoofed entities.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

META Stealer Enhances Stealth with Cryptographic Builds in v5.0 Update

META stealer v5.0 has recently launched, heralding a new phase of advanced features for the infostealer. This latest version introduces TLS encryption between the build and the C2 panel, a significant enhancement similar to recent updates in other leading stealers like Lumma and Vidar. The update announcement (screenshot below) emphasizes several key improvements aimed at enhancing functionality and security. This includes integration with TLS encryption, ensuring secure communication channels between the build and the control panel. This upgrade highlights the malware developer's commitment to enhancing the stealer's capabilities and reach.

[Image: META stealer 5.0 details (source: X)]

Decoding the New META Stealer v5.0: Features and Capabilities

The new META Stealer v5.0 update introduces a new build system allowing users to generate unique builds tailored to their specific requirements. This system is supported by the introduction of "Stub token" currency, enabling users to create new Runtime stubs directly from the panel. This feature enhances flexibility and customization options for users. Another notable addition is the "Crypt build" option, enhancing security by encrypting builds to avoid detection during scans. This feature ensures that builds remain undetected at scan time, reinforcing the stealer's stealth capabilities and making it harder for defenders to catch. Additionally, the update includes improvements to the panel's security and licensing systems. The redesigned panel incorporates enhanced protection measures, while the revamped licensing system aims to reduce operational disruptions for users.

Previous META Stealer Promises and Upgrades 

The makers of META Stealer released the new update on June 17, 2024, with a special focus on implementing a new system for generating unique stubs per user. This approach enhances individualized security and also highlights the stealer's commitment to continuous improvement and user satisfaction. Previously, in February 2023, META Stealer underwent significant updates with version 4.3. That update introduced features such as enhanced detection cleaning, the ability to create builds in multiple formats (including *.vbs and *.js), and integration with Telegram for build creation. These enhancements demonstrate the developers' continued focus on targeting unsuspecting victims. META stealer continues to evolve with each update, reinforcing its position as a versatile and robust information stealer designed to meet the diverse needs of its user base while continuing to target victims globally.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

NHS Ransomware Attack: What Makes Healthcare a Prime Target for Ransomware? – Source: www.databreachtoday.com

Source: www.databreachtoday.com – Author: 1 Fraud Management & Cybercrime , Healthcare , Industry Specific Rubrik’s Steve Stone on Reducing Data-Related Vulnerabilities in Healthcare June 18, 2024     Steve Stone, head of Zero Labs, Rubrik The recent ransomware attack on a key UK National Health Service IT vendor has forced two London hospitals to reschedule […]

The post NHS Ransomware Attack: What Makes Healthcare a Prime Target for Ransomware? – Source: www.databreachtoday.com first appeared on CISO2CISO.COM & CYBER SECURITY GROUP.

Hackers Plead Guilty After Breaching Law Enforcement Portal – Source: www.databreachtoday.com

Source: www.databreachtoday.com – Author: 1 Cybercrime , Fraud Management & Cybercrime , Government Justice Says Sagar Steven Singh and Nicholas Ceraolo Doxed and Threatened Victims Chris Riotta (@chrisriotta) • June 17, 2024     Image: Shutterstock Two hackers pleaded guilty Monday in federal court to conspiring to commit computer intrusion and aggravated identity theft. Authorities […]

The post Hackers Plead Guilty After Breaching Law Enforcement Portal – Source: www.databreachtoday.com first appeared on CISO2CISO.COM & CYBER SECURITY GROUP.

Police Dismantle Asian Crime Ring Behind $25M Android Fraud – Source: www.databreachtoday.com

Source: www.databreachtoday.com – Author: 1 Fraud Management & Cybercrime , Geo Focus: Asia , Geo-Specific Hackers Used Dozens of Servers to Distribute Malicious Android Apps Jayant Chakravarti (@JayJay_Tech) • June 17, 2024     The Singapore Police Force arrested a man they said is a cybercrime ringleader from Malaysia. (Image: Public Affairs Department, Singapore Police […]

The post Police Dismantle Asian Crime Ring Behind $25M Android Fraud – Source: www.databreachtoday.com first appeared on CISO2CISO.COM & CYBER SECURITY GROUP.

CISA Conducts First-Ever AI Security Incident Response Drill – Source: www.databreachtoday.com

Source: www.databreachtoday.com – Author: 1 Artificial Intelligence & Machine Learning , Governance & Risk Management , Government US Cyber Defense Agency Developing AI Security Incident Collaboration Playbook Chris Riotta (@chrisriotta) • June 17, 2024     The Cybersecurity and Infrastructure Security Agency is crafting a comprehensive framework to unify government, industry and global partners in […]

The post CISA Conducts First-Ever AI Security Incident Response Drill – Source: www.databreachtoday.com first appeared on CISO2CISO.COM & CYBER SECURITY GROUP.

Signal Foundation Warns Against EU's Plan to Scan Private Messages for CSAM

By: Newsroom
18 June 2024 at 12:22
A controversial proposal put forth by the European Union to scan users' private messages to detect child sexual abuse material (CSAM) poses severe risks to end-to-end encryption (E2EE), warned Meredith Whittaker, president of the Signal Foundation, which maintains the privacy-focused messaging service of the same name. "Mandating mass scanning of private communications fundamentally

Rethinking Democracy for the Age of AI

18 June 2024 at 07:04

There is a lot written about technology’s threats to democracy. Polarization. Artificial intelligence. The concentration of wealth and power. I have a more general story: The political and economic systems of governance that were created in the mid-18th century are poorly suited for the 21st century. They don’t align incentives well. And they are being hacked too effectively.

At the same time, the cost of these hacked systems has never been greater, across all human history. We have become too powerful as a species. And our systems cannot keep up with fast-changing disruptive technologies.

We need to create new systems of governance that align incentives and are resilient against hacking … at every scale. From the individual all the way up to the whole of society.

For this, I need you to drop your 20th century either/or thinking. This is not about capitalism versus communism. It’s not about democracy versus autocracy. It’s not even about humans versus AI. It’s something new, something we don’t have a name for yet. And it’s “blue sky” thinking, not even remotely considering what’s feasible today.

Throughout this talk, I want you to think of both democracy and capitalism as information systems. Socio-technical information systems. Protocols for making group decisions. Ones where different players have different incentives. These systems are vulnerable to hacking and need to be secured against those hacks.

We security technologists have a lot of expertise in both secure system design and hacking. That’s why we have something to add to this discussion.

And finally, this is a work in progress. I’m trying to create a framework for viewing governance. So think of this more as a foundation for discussion, rather than a road map to a solution. I think best by writing, and what you’re going to hear is the current draft of my writing—and my thinking. So everything is subject to change without notice.

OK, so let’s go.

We all know about misinformation and how it affects democracy. And how propagandists have used it to advance their agendas. This is an ancient problem, amplified by information technologies. Social media platforms that prioritize engagement. “Filter bubble” segmentation. And technologies for honing persuasive messages.

The problem ultimately stems from the way democracies use information to make policy decisions. Democracy is an information system that leverages collective intelligence to solve political problems. And then to collect feedback as to how well those solutions are working. This is different from autocracies that don’t leverage collective intelligence for political decision making. Or have reliable mechanisms for collecting feedback from their populations.

Those systems of democracy work well, but have no guardrails when fringe ideas become weaponized. That’s what misinformation targets. The historical solution for this was supposed to be representation. This is currently failing in the US, partly because of gerrymandering, safe seats, only two parties, money in politics and our primary system. But the problem is more general.

James Madison wrote about this in 1787, where he made two points. One, that representatives serve to filter popular opinions, limiting extremism. And two, that geographical dispersal makes it hard for those with extreme views to participate. It’s hard to organize. To be fair, these limitations are both good and bad. In any case, current technology—social media—breaks them both.

So this is a question: What does representation look like in a world without either filtering or geographical dispersal? Or, how do we avoid polluting 21st century democracy with prejudice, misinformation and bias? Things that impair both the problem-solving and feedback mechanisms.

That’s the real issue. It’s not about misinformation, it’s about the incentive structure that makes misinformation a viable strategy.

This is problem No. 1: Our systems have misaligned incentives. What’s best for the small group often doesn’t match what’s best for the whole. And this is true across all sorts of individuals and group sizes.

Now, historically, we have used misalignment to our advantage. Our current systems of governance leverage conflict to make decisions. The basic idea is that coordination is inefficient and expensive. Individual self-interest leads to local optimizations, which results in optimal group decisions.

But this is also inefficient and expensive. The U.S. spent $14.5 billion on the 2020 presidential, senate and congressional elections. I don’t even know how to calculate the cost in attention. That sounds like a lot of money, but step back and think about how the system works. The economic value of winning those elections is so great because that’s how you impose your own incentive structure on the whole.

More generally, the cost of our market economy is enormous. For example, $780 billion is spent world-wide annually on advertising. Many more billions are wasted on ventures that fail. And that’s just a fraction of the total resources lost in a competitive market environment. And there are other collateral damages, which are spread non-uniformly across people.

We have accepted these costs of capitalism—and democracy—because the inefficiency of central planning was considered to be worse. That might not be true anymore. The costs of conflict have increased. And the costs of coordination have decreased. Corporations demonstrate that large centrally planned economic units can compete in today’s society. Think of Walmart or Amazon. If you compare GDP to market cap, Apple would be the eighth largest country on the planet. Microsoft would be the tenth.

Another effect of these conflict-based systems is that they foster a scarcity mindset. And we have taken this to an extreme. We now think in terms of zero-sum politics. My party wins, your party loses. And winning next time can be more important than governing this time. We think in terms of zero-sum economics. My product’s success depends on my competitors’ failures. We think zero-sum internationally. Arms races and trade wars.

Finally, conflict as a problem-solving tool might not give us good enough answers anymore. The underlying assumption is that if everyone pursues their own self interest, the result will approach everyone’s best interest. That only works for simple problems and requires systemic oppression. We have lots of problems—complex, wicked, global problems—that don’t work that way. We have interacting groups of problems that don’t work that way. We have problems that require more efficient ways of finding optimal solutions.

Note that there are multiple effects of these conflict-based systems. We have bad actors deliberately breaking the rules. And we have selfish actors taking advantage of insufficient rules.

The latter is problem No. 2: What I refer to as “hacking” in my latest book: “A Hacker’s Mind.” Democracy is a socio-technical system. And all socio-technical systems can be hacked. By this I mean that the rules are either incomplete or inconsistent or outdated—they have loopholes. And these can be used to subvert the rules. This is Peter Thiel subverting the Roth IRA to avoid paying taxes on $5 billion in income. This is gerrymandering, the filibuster, and must-pass legislation. Or tax loopholes, financial loopholes, regulatory loopholes.

In today’s society, the rich and powerful are just too good at hacking. And it is becoming increasingly impossible to patch our hacked systems. Because the rich use their power to ensure that the vulnerabilities don’t get patched.

This is bad for society, but it’s basically the optimal strategy in our competitive governance systems. Their zero-sum nature makes hacking an effective, if parasitic, strategy. Hacking isn’t a new problem, but today hacking scales better—and is overwhelming the security systems in place to keep hacking in check. Think about gun regulations, climate change, opioids. And complex systems make this worse. These are all non-linear, tightly coupled, unrepeatable, path-dependent, adaptive, co-evolving systems.

Now, add into this mix the risks that arise from new and dangerous technologies such as the internet or AI or synthetic biology. Or molecular nanotechnology, or nuclear weapons. Here, misaligned incentives and hacking can have catastrophic consequences for society.

This is problem No. 3: Our systems of governance are not suited to our power level. They tend to be rights based, not permissions based. They’re designed to be reactive, because traditionally there was only so much damage a single person could do.

We do have systems for regulating dangerous technologies. Consider automobiles. They are regulated in many ways: drivers licenses + traffic laws + automobile regulations + road design. Compare this to aircraft. Much more onerous licensing requirements, rules about flights, regulations on aircraft design and testing and a government agency overseeing it all day-to-day. Or pharmaceuticals, which have very complex rules surrounding everything around researching, developing, producing and dispensing. We have all these regulations because this stuff can kill you.

The general term for this kind of thing is the “precautionary principle.” When random new things can be deadly, we prohibit them unless they are specifically allowed.

So what happens when a significant percentage of our jobs are as potentially damaging as a pilot’s? Or even more damaging? When one person can affect everyone through synthetic biology. Or where a corporate decision can directly affect climate. Or something in AI or robotics. Things like the precautionary principle are no longer sufficient. Because breaking the rules can have global effects.

And AI will supercharge hacking. We have created a series of non-interoperable systems that actually interact and AI will be able to figure out how to take advantage of more of those interactions: finding new tax loopholes or finding new ways to evade financial regulations. Creating “micro-legislation” that surreptitiously benefits a particular person or group. And catastrophic risk means this is no longer tenable.

So these are our core problems: misaligned incentives leading to too effective hacking of systems where the costs of getting it wrong can be catastrophic.

Or, to put more words on it: Misaligned incentives encourage local optimization, and that’s not a good proxy for societal optimization. This encourages hacking, which now generates greater harm than at any point in the past because the amount of damage that can result from local optimization is greater than at any point in the past.

OK, let’s get back to the notion of democracy as an information system. It’s not just democracy: Any form of governance is an information system. It’s a process that turns individual beliefs and preferences into group policy decisions. And, it uses feedback mechanisms to determine how well those decisions are working and then makes corrections accordingly.

Historically, there are many ways to do this. We can have a system where no one’s preference matters except the monarch’s or the nobles’ or the landowners’. Sometimes the stronger army gets to decide—or the people with the money.

Or we could tally up everyone’s preferences and do the thing that at least half of the people want. That’s basically the promise of democracy today, at its ideal. Parliamentary systems are better, but only in the margins—and it all feels kind of primitive. Lots of people write about how informationally poor elections are at aggregating individual preferences. It also results in all these misaligned incentives.

I realize that democracy serves different functions. Peaceful transition of power, minimizing harm, equality, fair decision making, better outcomes. I am taking for granted that democracy is good for all those things. I’m focusing on how we implement it.

Modern democracy uses elections to determine who represents citizens in the decision-making process. And all sorts of other ways to collect information about what people think and want, and how well policies are working. These are opinion polls, public comments to rule-making, advocating, lobbying, protesting and so on. And, in reality, it’s been hacked so badly that it does a terrible job of executing on the will of the people, creating further incentives to hack these systems.

To be fair, the democratic republic was the best form of government that mid-18th-century technology could invent. Because communications and travel were hard, we needed to choose one of us to go all the way over there and pass laws in our name. It was always a coarse approximation of what we wanted. And our principles, values, conceptions of fairness, and our ideas about legitimacy and authority have evolved a lot since the mid-18th century. Even the notion of optimal group outcomes depended on who was considered in the group and who was out.

But democracy is not a static system, it’s an aspirational direction. One that really requires constant improvement. And our democratic systems have not evolved at the same pace that our technologies have. Blocking progress in democracy is itself a hack of democracy.

Today we have much better technology that we can use in the service of democracy. Surely there are better ways to turn individual preferences into group policies. Now that communications and travel are easy. Maybe we should assign representation by age, or profession or randomly by birthday. Maybe we can invent an AI that calculates optimal policy outcomes based on everyone’s preferences.

Whatever we do, we need systems that better align individual and group incentives, at all scales. Systems designed to be resistant to hacking. And resilient to catastrophic risks. Systems that leverage cooperation more and conflict less. And are not zero-sum.

Why can’t we have a game where everybody wins?

This has never been done before. It’s not capitalism, it’s not communism, it’s not socialism. It’s not current democracies or autocracies. It would be unlike anything we’ve ever seen.

Some of this comes down to how trust and cooperation work. When I wrote “Liars and Outliers” in 2012, I wrote about four systems for enabling trust: our innate morals, concern about our reputations, the laws we live under and security technologies that constrain our behavior. I wrote about how the first two are more informal than the last two. And how the last two scale better, and allow for larger and more complex societies. They enable cooperation amongst strangers.

What I didn’t appreciate is how different the first and last two are. Morals and reputation are both old biological systems of trust. They’re person to person, based on human connection and cooperation. Laws—and especially security technologies—are newer systems of trust that force us to cooperate. They’re socio-technical systems. They’re more about confidence and control than they are about trust. And that allows them to scale better. Driving a taxi used to be one of the country’s most dangerous professions. Uber changed that through pervasive surveillance. My Uber driver and I don’t know or trust each other, but the technology lets us both be confident that neither of us will cheat or attack the other. Both drivers and passengers compete for star rankings, which align local and global incentives.

In today’s tech-mediated world, we are replacing the rituals and behaviors of cooperation with security mechanisms that enforce compliance. And innate trust in people with compelled trust in processes and institutions. That scales better, but we lose the human connection. It’s also expensive, and becoming even more so as our power grows. We need more security for these systems. And the results are much easier to hack.

But here’s the thing: Our informal human systems of trust are inherently unscalable. So maybe we have to rethink scale.

Our 18th century systems of democracy were the only things that scaled with the technology of the time. Imagine a group of friends deciding where to have dinner. One is kosher, one is a vegetarian. They would never use a winner-take-all ballot to decide where to eat. But that’s a system that scales to large groups of strangers.

Scale matters more broadly in governance as well. We have global systems of political and economic competition. On the other end of the scale, the most common form of governance on the planet is socialism. It’s how families function: people work according to their abilities, and resources are distributed according to their needs.

I think we need governance that is both very large and very small. Our catastrophic technological risks are planetary-scale: climate change, AI, internet, bio-tech. And we have all the local problems inherent in human societies. We have very few problems anymore that are the size of France or Virginia. Some systems of governance work well on a local level but don’t scale to larger groups. But now that we have more technology, we can make other systems of democracy scale.

This runs headlong into historical norms about sovereignty. But that’s already becoming increasingly irrelevant. The modern concept of a nation arose around the same time as the modern concept of democracy. But constituent boundaries are now larger and more fluid, and depend a lot on context. It makes no sense that the decisions about the “drug war”—or climate migration—are delineated by nation. The issues are much larger than that. Right now there is no governance body with the right footprint to regulate Internet platforms like Facebook. Which has more users world-wide than Christianity.

We also need to rethink growth. Growth only equates to progress when the resources necessary to grow are cheap and abundant. Growth is often extractive, and at the expense of something else. Growth is how we fuel our zero-sum systems. If the pie gets bigger, it’s OK that we waste some of the pie in order for it to grow. That doesn’t make sense when resources are scarce and expensive. Growing the pie can end up costing more than the increase in pie size. Sustainability makes more sense, and it is a metric better suited to the environment we’re in right now.

Finally, agility is also important. Back to systems theory, governance is an attempt to control complex systems with complicated systems. This gets harder as the systems get larger and more complex. And as catastrophic risk raises the costs of getting it wrong.

In recent decades, we have replaced the richness of human interaction with economic models. Models that turn everything into markets. Market fundamentalism scaled better, but the social cost was enormous. A lot of how we think and act isn’t captured by those models. And those complex models turn out to be very hackable. Increasingly so at larger scales.

Lots of people have written about the speed of technology versus the speed of policy. To relate it to this talk: Our human systems of governance need to be compatible with the technologies they’re supposed to govern. If they’re not, eventually the technological systems will replace the governance systems. Think of Twitter as the de facto arbiter of free speech.

This means that governance needs to be agile. And able to quickly react to changing circumstances. Imagine a court saying to Peter Thiel: “Sorry. That’s not how Roth IRAs are supposed to work. Now give us our tax on that $5B.” This is also essential in a technological world: one that is moving at unprecedented speeds, where getting it wrong can be catastrophic and one that is resource constrained. Agile patching is how we maintain security in the face of constant hacking—and also red teaming. In this context, both journalism and civil society are important checks on government.

I want to quickly mention two ideas for democracy, one old and one new. I’m not advocating for either. I’m just trying to open you up to new possibilities. The first is sortition. These are citizen assemblies brought together to study an issue and reach a policy decision. They were popular in ancient Greece and Renaissance Italy, and are increasingly being used today in Europe. The only vestige of this in the U.S. is the jury. But you can also think of trustees of an organization. The second idea is liquid democracy. This is a system where everybody has a proxy that they can transfer to someone else to vote on their behalf. Representatives hold those proxies, and their vote strength is proportional to the number of proxies they have. We have something like this in corporate proxy governance.

Both of these are algorithms for converting individual beliefs and preferences into policy decisions. Both of these are made easier through 21st century technologies. They are both democracies, but in new and different ways. And while they’re not immune to hacking, we can design them from the beginning with security in mind.
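To make the liquid-democracy mechanics a little more concrete, here is a minimal sketch of one way delegated proxies could be resolved and tallied. Everything in it—the data structures, the names, and the rule that a circular delegation simply loses its weight—is an assumption made for illustration, not a description of any deployed system.

```python
# Minimal sketch of liquid-democracy tallying (illustrative assumptions only):
# each voter either votes directly or delegates their proxy to someone else,
# and a representative's vote strength is the number of proxies flowing to them.

def tally(delegations, votes):
    """delegations: voter -> delegate; votes: voter -> choice (direct voters)."""
    def resolve(voter, seen=()):
        # Follow the proxy chain until it reaches someone who votes directly.
        if voter in votes:
            return voter
        if voter in seen or voter not in delegations:
            return None  # cycle or dead end: this proxy is simply lost
        return resolve(delegations[voter], seen + (voter,))

    totals = {}
    for voter in set(delegations) | set(votes):
        rep = resolve(voter)
        if rep is not None:
            totals[votes[rep]] = totals.get(votes[rep], 0) + 1
    return totals

# Alice and Bob delegate (Bob indirectly, via Alice); Carol and Dave vote directly.
delegations = {"alice": "carol", "bob": "alice"}
votes = {"carol": "yes", "dave": "no"}
print(tally(delegations, votes))  # {'yes': 3, 'no': 1}
```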

This points to technology as a key component of any solution. We know how to use technology to build systems of trust. Both the informal biological kind and the formal compliance kind. We know how to use technology to help align incentives, and to defend against hacking.

We talked about AI hacking; AI can also be used to defend against hacking, finding vulnerabilities in computer code, finding tax loopholes before they become law and uncovering attempts at surreptitious micro-legislation.

Think back to democracy as an information system. Can AI techniques be used to uncover our political preferences and turn them into policy outcomes, get feedback and then iterate? This would be more accurate than polling. And maybe even elections. Can an AI act as our representative? Could it do a better job than a human at voting the preferences of its constituents?

Can we have an AI in our pocket that votes on our behalf, thousands of times a day, based on the preferences it infers we have? Or maybe based on the preferences it infers we would have if we read up on the issues and weren’t swayed by misinformation? It’s just another algorithm for converting individual preferences into policy decisions. And it certainly solves the problem of people not paying attention to politics.

But slow down: This is rapidly devolving into technological solutionism. And we know that doesn’t work.

A general question to ask here is when do we allow algorithms to make decisions for us? Sometimes it’s easy. I’m happy to let my thermostat automatically turn my heat on and off or to let an AI drive a car or optimize the traffic lights in a city. I’m less sure about an AI that sets tax rates, or corporate regulations or foreign policy. Or an AI that tells us that it can’t explain why, but strongly urges us to declare war—right now. Each of these is harder because they are more complex systems: non-local, multi-agent, long-duration and so on. I also want any AI that works on my behalf to be under my control. And not controlled by a large corporate monopoly that allows me to use it.

And learned helplessness is an important consideration. We’re probably OK with no longer needing to know how to drive a car. But we don’t want a system that results in us forgetting how to run a democracy. Outcomes matter here, but so do mechanisms. Any AI system should engage individuals in the process of democracy, not replace them.

So while an AI that does all the hard work of governance might generate better policy outcomes, there is social value in a human-centric political system, even if it is less efficient. And more technologically efficient preference collection might not be better, even if it is more accurate.

Procedure and substance need to work together. There is a role for AI in decision making: moderating discussions, highlighting agreements and disagreements, helping people reach consensus. But it is an independent good that we humans remain engaged in—and in charge of—the process of governance.

And that value is critical to making democracy function. Democratic knowledge isn’t something that’s out there to be gathered: It’s dynamic; it gets produced through the social processes of democracy. The term of art is “preference formation.” We’re not just passively aggregating preferences, we create them through learning, deliberation, negotiation and adaptation. Some of these processes are cooperative and some of these are competitive. Both are important. And both are needed to fuel the information system that is democracy.

We’re never going to remove conflict and competition from our political and economic systems. Human disagreement isn’t just a surface feature; it goes all the way down. We have fundamentally different aspirations. We want different ways of life. I talked about optimal policies. Even that notion is contested: optimal for whom, with respect to what, over what time frame? Disagreement is fundamental to democracy. We reach different policy conclusions based on the same information. And it’s the process of making all of this work that makes democracy possible.

So we actually can’t have a game where everybody wins. Our goal has to be to accommodate plurality, to harness conflict and disagreement, and not to eliminate it. While, at the same time, moving from a player-versus-player game to a player-versus-environment game.

There’s a lot missing from this talk. Like what these new political and economic governance systems should look like. Democracy and capitalism are intertwined in complex ways, and I don’t think we can recreate one without also recreating the other. My comments about agility lead to questions about authority and how that interplays with everything else. And how agility can be hacked as well. We haven’t even talked about tribalism in its many forms. In order for democracy to function, people need to care about the welfare of strangers who are not like them. We haven’t talked about rights or responsibilities. What is off limits to democracy is a huge discussion. And Buterin’s trilemma also matters here: that you can’t simultaneously build systems that are secure, distributed, and scalable.

I also haven’t given a moment’s thought to how to get from here to there. Everything I’ve talked about—incentives, hacking, power, complexity—also applies to any transition systems. But I think we need to have unconstrained discussions about what we’re aiming for. If for no other reason than to question our assumptions. And to imagine the possibilities. And while a lot of the AI parts are still science fiction, they’re not far-off science fiction.

I know we can’t clear the board and build a new governance structure from scratch. But maybe we can come up with ideas that we can bring back to reality.

To summarize, the systems of governance we designed at the start of the Industrial Age are ill-suited to the Information Age. Their incentive structures are all wrong. They’re insecure and they’re wasteful. They don’t generate optimal outcomes. At the same time we’re facing catastrophic risks to society due to powerful technologies. And a vastly constrained resource environment. We need to rethink our systems of governance; more cooperation and less competition and at scales that are suited to today’s problems and today’s technologies. With security and precautions built in. What comes after democracy might very well be more democracy, but it will look very different.

This feels like a challenge worthy of our security expertise.

This text is the transcript from a keynote speech delivered during the RSA Conference in San Francisco on April 25, 2023. It was previously published in Cyberscoop. I thought I posted it to my blog and Crypto-Gram last year, but it seems that I didn’t.

Helpful tools to get started in IoT Assessments

18 June 2024 at 09:00

The Internet of Things (IoT) can be a daunting field to get into. With many different tools and products available on the market, it can be confusing to even know where to start. Having performed dozens of IoT assessments, I felt it would be beneficial to compile a basic list of items that are essential for getting started with testing embedded devices. The tools covered in this post are primarily used to interact with the debug interfaces of embedded devices; however, many of them have multiple functions, from reading data from a memory chip to removing components from the physical circuit board. I would like to note that neither I, nor Rapid7, benefit in any way from the sale of any of these products. We honestly believe they are useful tools for any beginner.

1) Serial Debugger

One of the most frequently used items in IoT testing is a device for interfacing with the low-speed interfaces available on embedded devices. Gaining access to the debug interface of an embedded device is the easiest way to get a look under the hood of how the device operates. One of the most popular and readily available options on the market currently is the Tigard.


The Tigard is a great open-source tool that supports all the commonly used interfaces you might encounter on modern embedded devices: Universal Asynchronous Receiver-Transmitter (UART), Joint Test Action Group (JTAG), Serial Peripheral Interface (SPI), Inter-Integrated Circuit (I2C), and Serial Wire Debug (SWD). This device allows you to connect to various serial consoles or even extract the contents of commonly found flash memory chips. It is powered by a USB-C connection and can also supply commonly used voltages to power components when needed.

Link: https://www.crowdsupply.com/securinghw/tigard
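To make the serial-console part of that workflow concrete, here is a minimal sketch using pyserial. The device path (/dev/ttyUSB0) and the 115200 baud rate are assumptions—check your operating system's device list and the target's documentation for the real values—and in practice many people simply use a terminal program such as screen or minicom once the UART pins have been identified.

```python
# Minimal sketch: talk to a target's UART console through a USB serial adapter
# such as the Tigard. /dev/ttyUSB0 and 115200 baud are assumptions -- check
# dmesg (or Device Manager) and the target's documentation for the real values.
import serial  # pip install pyserial

with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=2) as console:
    console.write(b"\r\n")        # nudge the target to print a prompt
    output = console.read(4096)   # collect boot messages or a shell prompt
    print(output.decode(errors="replace"))
```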

2) PCBite Probes

A tool that saves a ton of time when connecting to serial interfaces and on-board components is a set of PCBite probes. Without these probes, you would often have to resort to soldering on header pins or trying to attach to on-board components using probe connectors.


The starter-level probe set includes four hands-free probes, a set of PCB holders, a magnetic base, and accessories. Oftentimes embedded devices contain small components on the circuit board that are not easily accessible because of their size. These probes allow quick, solder-free connections to be made to embedded devices. All you need to do is position the spring-loaded probes on the relevant areas of the circuit board and connect the included DuPont wires to either a logic analyzer or a serial debugger to interface with the target device. The included circuit board holders are a nice touch, keeping the board firmly in position while you work.

Link: https://sensepeek.com/pcbite-20

3) Rework Station

While working with embedded devices, you might run into scenarios that involve removing small components from the device for offline analysis. There are many options for rework stations on the internet, all at various levels of price and functionality. A model that hits the sweet spot of price and functionality is the Aoyue 968A+ Professional SMD Digital Hot Air Rework Station.


This rework station includes a number of tools to make any reworking job easy in one simple package: a soldering iron, a hot-air rework gun, a vacuum pickup tool, and a fume extractor. When performing embedded testing, it is often necessary to either solder wires onto connections or remove components from the board for data extraction. The 70-watt soldering iron and 550-watt hot-air gun provide plenty of power for quick soldering jobs and component rework.

Link: https://www.amazon.com/Aoyue-968A-Digital-Rework-Station/dp/B006FA481G?th=1

4) Logic Analyzer

Another important tool to have on hand when testing embedded devices is a logic analyzer. Many times, you will find that the debug port on an embedded device is not labeled on the circuit board. That is when a logic analyzer comes in handy, letting you identify the various signals on the board without unnecessary guesswork. Logic analyzers capture and decode signals found on the board to identify protocols such as UART, SPI, and I2C. There are many on the market, but the sweet spot for price and functionality is the Saleae Logic 8.


Saleae offers many different models of logic analyzers at different price points. Typically, the base model, which supports 8 channels at a maximum sample rate of 100 MS/s, is sufficient for the majority of work; however, Saleae does offer additional models that support more channels at higher speeds. The included Logic 2 software allows you to seamlessly interact with the device, identify protocols, and decode signals on the board.

Link: https://usd.saleae.com/products/saleae-logic-8
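As one small example of what you end up doing with such a capture: if a suspected UART line is unlabeled, the shortest pulse in the capture approximates one bit period, which gives you the baud rate. The sketch below assumes you have exported edge timestamps (in seconds) for that line from your analyzer software; the export format and the list of candidate rates are assumptions for illustration, not a Saleae-specific API.

```python
# Minimal sketch: estimate a UART baud rate from logic-analyzer edge timestamps.
# `transitions` is a list of times (in seconds) at which the line changed level;
# the shortest gap between edges approximates one bit period.
def estimate_baud(transitions):
    gaps = [b - a for a, b in zip(transitions, transitions[1:]) if b > a]
    bit_time = min(gaps)
    raw_rate = 1.0 / bit_time
    common_rates = [9600, 19200, 38400, 57600, 115200, 230400, 460800, 921600]
    return min(common_rates, key=lambda r: abs(r - raw_rate))

# Edges roughly 8.7 microseconds apart -> about 115200 baud.
print(estimate_baud([0.0, 8.7e-6, 17.4e-6, 26.1e-6]))  # 115200
```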

As we've explored in this blog post, there are many options on the market for conducting detailed analysis of embedded devices. Many of these tools are available at different price points and offer various levels of functionality and ease of use when interfacing with embedded devices. The goal of this guide is not to provide a comprehensive list of all available options, but rather to cover the basic tools you need to begin your IoT journey.

Cybercriminals Exploit Free Software Lures to Deploy Hijack Loader and Vidar Stealer

By: Newsroom
18 June 2024 at 09:30
Threat actors are luring unsuspecting users with free or pirated versions of commercial software to deliver a malware loader called Hijack Loader, which then deploys an information stealer known as Vidar Stealer. "Adversaries had managed to trick users into downloading password-protected archive files containing trojanized copies of a Cisco Webex Meetings App (ptService.exe)," Trellix security

CMMC 1.0 & CMMC 2.0 – What’s Changed?

This blog delves into CMMC, the introduction of CMMC 2.0, what's changed, and what it means for your business.

The post CMMC 1.0 & CMMC 2.0 – What’s Changed? appeared first on Scytale.

The post CMMC 1.0 & CMMC 2.0 – What’s Changed? appeared first on Security Boulevard.

Start building your CRA compliance strategy now

18 June 2024 at 03:00

In March 2024, the European Parliament overwhelmingly approved the EU Cyber Resilience Act, or CRA, which will now be formally adopted with the goal of improving the cybersecurity of digital products. It sets out to do this by establishing essential requirements for manufacturers to ensure their products reach the market with fewer vulnerabilities.

The post Start building your CRA compliance strategy now appeared first on Security Boulevard.

Navigating Retail: Overcoming the Top 3 Identity Security Challenges

18 June 2024 at 01:33

As retailers compete in an increasingly competitive marketplace, they invest a great deal of resources in becoming household names. But brand recognition is a double-edged sword when it comes to cybersecurity. The bigger your name, the bigger the cyber target on your back. Data breaches in the retail sector cost an average of $3.28 million...

The post Navigating Retail: Overcoming the Top 3 Identity Security Challenges appeared first on Silverfort.

The post Navigating Retail: Overcoming the Top 3 Identity Security Challenges appeared first on Security Boulevard.

MEDUSA Ransomware Group Demands $220,000 from US Institutions, Threatens Data Exposure


Threat actors (TAs) associated with the notorious MEDUSA ransomware have escalated their activities and have allegedly targeted two institutions in the USA. In a scenario mirroring its previous attacks, the group has not divulged critical information, such as the type of data compromised. It has, however, demanded a ransom of US$120,000 from Fitzgerald, DePietro & Wojnas CPAs, P.C. and US$100,000 from Tri-City College Prep High School in exchange for not leaking the organizations' internal data.

Understanding the MEDUSA Ransomware Attack

One of the two institutions targeted by MEDUSA is Tri-Cities Preparatory High School, a public charter middle and high school located in Prescott, Arizona, USA. The threat actor claimed to have access to 1.2 GB of the school's data and threatened to publish it within 7–8 days.

The other organization the group claims to have targeted is Fitzgerald, DePietro & Wojnas CPAs, P.C., an accounting firm based in Utica, New York, USA. The group claims to have access to 92.5 GB of the firm's data and has threatened to publish it within 8–9 days.

Despite the claims made by the ransomware group, the official websites of the targeted organizations appear to be fully functional, with no signs of foul play. The organizations have not yet responded to the alleged cyberattack, leaving the group's claims unverified. This article will be updated once the respective organizations respond. The absence of confirmation raises questions about the authenticity of the ransomware claims: it remains to be seen whether the tactic employed by the MEDUSA group is simply to garner attention or whether there are ulterior motives behind its actions. Only an official statement from the affected organizations can reveal the true nature of the situation. If the claims made by the MEDUSA ransomware group do turn out to be true, however, the consequences could be sweeping. The potential leak of sensitive data could pose a significant threat to the affected organizations and their staff, students and employees.

Who is the MEDUSA Ransomware Group?

MEDUSA first came into the limelight in June 2021 and has since launched attacks on organizations in many countries, targeting multiple industries, including healthcare, education, manufacturing, and retail. Most of its victims, though, are based in the United States of America. MEDUSA operates as a Ransomware-as-a-Service (RaaS) platform: it provides would-be affiliates with the malicious software and infrastructure required to carry out disruptive ransomware attacks. The group also runs a public Telegram channel that TAs use to post stolen data, which could be an attempt to extort organizations and demand ransom.

History of MEDUSA Ransomware Attacks

Last week, the MEDUSA group claimed responsibility for the cyberattack on Australia's Victoria Racing Club (VRC). To prove the claim's authenticity, MEDUSA shared thirty documents from the club and demanded a ransom of US$700,000 from anyone who wanted to either delete the data or download it. The leaked data included financial details of gaming machines, prizes won by VRC members, customer invoices, marketing details, names, email addresses, and mobile phone numbers.

The VRC confirmed the breach, with its chief executive Steve Rosich releasing a statement: "We are currently communicating with our employees, members, partners, and sponsors to inform them that the VRC recently experienced a cyber incident."

In 2024, MEDUSA has targeted four organizations across different countries, including France, Italy, and Spain. The group's modus operandi remains constant, with announcements made on its dark web forum accompanied by deadlines and ransom demands. As organizations grapple with the fallout of cyberattacks by groups like MEDUSA, it is critical to remain cautious and implement strategic security measures.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

Guidehouse and Nan McKay to Pay $11.3M for Cybersecurity Failures in COVID-19 Rental Assistance


Guidehouse Inc., based in McLean, Virginia, and Nan McKay and Associates, headquartered in El Cajon, California, have agreed to pay settlements totaling $11.3 million to resolve allegations under the False Claims Act. The settlements stem from their failure to meet cybersecurity requirements in contracts aimed at providing secure online access for low-income New Yorkers applying for federal rental assistance during the COVID-19 pandemic.

What Exactly Happened?

In response to the economic hardships brought on by the pandemic, Congress enacted the Emergency Rental Assistance Program (ERAP) in early 2021. This initiative was designed to offer financial support to eligible low-income households in covering rent, rental arrears, utilities, and other housing-related expenses. Participating state agencies, such as New York's Office of Temporary and Disability Assistance (OTDA), were tasked with distributing federal funding to qualified tenants and landlords. Guidehouse assumed a pivotal role as the prime contractor for New York's ERAP, responsible for overseeing the ERAP technology and services. Nan McKay acted as Guidehouse's subcontractor, entrusted with delivering and maintaining the ERAP technology used by New Yorkers to submit online applications for rental assistance.

Admission of Violations and Settlement

Central to the allegations were breaches of cybersecurity protocols. Both Guidehouse and Nan McKay admitted to failing to meet their obligation to conduct required pre-production cybersecurity testing on the ERAP application. Consequently, the ERAP system went live on June 1, 2021, only to be shut down twelve hours later by OTDA due to a cybersecurity breach. The breach exposed applicants' personally identifiable information (PII), which was found accessible on the internet. Guidehouse and Nan McKay acknowledged that proper cybersecurity testing could have detected and potentially prevented the breach. Additionally, Guidehouse admitted to using a third-party cloud software program to store PII without obtaining OTDA's permission, violating its contractual obligations.

Government Response and Accountability

Principal Deputy Assistant Attorney General Brian M. Boynton of the Justice Department’s Civil Division emphasized the importance of adhering to cybersecurity commitments associated with federal funding. “Federal funding frequently comes with cybersecurity obligations, and contractors and grantees must honor these commitments,” said Boynton. “The Justice Department will continue to pursue knowing violations of material cybersecurity requirements aimed at protecting sensitive personal information.”

U.S. Attorney Carla B. Freedman for the Northern District of New York echoed these sentiments, highlighting the necessity for federal contractors to prioritize cybersecurity obligations. “Contractors who receive federal funding must take their cybersecurity obligations seriously,” said Freedman. “We will continue to hold entities and individuals accountable when they knowingly fail to implement and follow cybersecurity requirements essential to protect sensitive information.”

Acting Inspector General Richard K. Delmar of the Department of the Treasury emphasized the severe impact of these breaches on a program crucial to the government’s pandemic recovery efforts. He expressed gratitude for the partnership with the DOJ in addressing this breach and ensuring accountability. “These vendors failed to meet their data integrity obligations in a program on which so many eligible citizens depend for rental security, which jeopardized the effectiveness of a vital part of the government’s pandemic recovery effort,” said Delmar. “Treasury OIG is grateful for DOJ’s support of its oversight work to accomplish this recovery.”

New York State Comptroller Thomas P. DiNapoli emphasized the critical role of protecting the integrity of programs like ERAP, vital to economic recovery. He thanked federal partners for their collaborative efforts in holding these contractors accountable. “This settlement sends a strong message to New York State contractors that there will be consequences if they fail to safeguard the personal information entrusted to them or meet the terms of their contracts,” said DiNapoli. “Rental assistance has been vital to our economic recovery, and the integrity of the program needs to be protected. I thank the United States Department of Justice, United States Attorney for the Northern District of New York Freedman and the United States Department of Treasury Office of the Inspector General for their partnership in exposing this breach and holding these vendors accountable.”

Initiative to Address Cybersecurity Risks

In response to such breaches, the Deputy Attorney General announced the Civil Cyber-Fraud Initiative on October 6, 2021. This initiative aims to hold accountable entities or individuals who knowingly endanger sensitive information through inadequate cybersecurity practices or misrepresentations. The investigation into these breaches was initiated following a whistleblower lawsuit under the False Claims Act. As part of the settlement, whistleblower Elevation 33 LLC, owned by a former Guidehouse employee, will receive approximately $1.95 million. Trial Attorney J. Jennifer Koh from the Civil Division's Commercial Litigation Branch, Fraud Section, and Assistant U.S. Attorney Adam J. Katz from the Northern District of New York led the case, with support from the Department of the Treasury OIG and the Office of the New York State Comptroller. These settlements highlight the imperative for rigorous cybersecurity measures in federal contracts, particularly in safeguarding sensitive personal information critical to public assistance programs. As the government continues to navigate evolving cybersecurity threats, it remains steadfast in enforcing accountability among contractors entrusted with protecting essential public resources.

Cybersecurity Experts Warn of Rising Malware Threats from Sophisticated Social Engineering Tactics


Cybersecurity researchers have uncovered a disturbing trend in malware delivery tactics involving sophisticated social engineering techniques. These methods exploit user trust and familiarity with PowerShell scripts to compromise systems. Two threat clusters in particular, TA571 and the ClearFake campaign, have been seen leveraging social engineering to spread malware. According to researchers, the threat actors associated with TA571 and the ClearFake cluster have been actively using a novel approach to infiltrate systems. The technique involves manipulating users into copying and pasting malicious PowerShell scripts under the guise of resolving legitimate issues.

Understanding the TA571 and ClearFake Campaign 

Example of a ClearFake attack chain. (Source: Proofpoint)

The TA571 campaign, first observed in March 2024, distributed emails containing HTML attachments that mimic legitimate Microsoft Word error messages. These messages coerce users into executing PowerShell scripts supposedly aimed at fixing document viewing issues. Similarly, the ClearFake campaign, identified in April 2024, employs fake browser update prompts on compromised websites. These prompts instruct users to run PowerShell scripts to install what appear to be necessary security certificates, says Proofpoint.

Upon interacting with the malicious prompts, users unwittingly copy PowerShell commands to their clipboard. Subsequent instructions guide them to paste and execute these commands in a PowerShell terminal or via the Windows Run dialog. Once executed, these scripts initiate a chain of events leading to the download and execution of malware payloads such as DarkGate, Matanbuchus, and NetSupport RAT.

The complexity of these attacks is compounded by their ability to evade traditional detection methods. Malicious scripts are often concealed within double-Base64-encoded HTML elements or obscured in JavaScript, making them challenging to identify and block preemptively.
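For readers unfamiliar with the term, "double-Base64 encoded" simply means the payload has been Base64-encoded twice, so an analyst has to peel off two layers to recover the underlying command. The harmless sketch below illustrates the idea; the encoded string is a stand-in of my own, not an actual sample from these campaigns.

```python
import base64

# Illustrative only: build and then unwrap a doubly Base64-encoded string.
# The inner content is a benign stand-in, not malware from the campaigns.
inner = base64.b64encode(b"Write-Output 'hello'")   # first encoding layer
outer = base64.b64encode(inner).decode()            # second encoding layer

# An analyst reverses both layers to see what would actually run.
recovered = base64.b64decode(base64.b64decode(outer))
print(recovered.decode())  # Write-Output 'hello'
```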

Attack Variants, Evolution, and Recommendations

Since its initial observations, Proofpoint has noted the evolution of these techniques. TA571, for instance, has diversified its lures, sometimes directing victims to use the Windows Run dialog for script execution instead of a PowerShell terminal. Meanwhile, ClearFake has incorporated blockchain-based techniques like "EtherHiding" to host malicious scripts, adding a layer of obfuscation.

These developments highlight the critical importance of user education and stronger cybersecurity measures within organizations. Employees must be trained to recognize suspicious messages and actions that prompt the execution of PowerShell scripts from unknown sources. Organizations should also deploy advanced threat detection and blocking mechanisms capable of identifying malicious activity embedded within seemingly legitimate web pages or email attachments. While the TA571 and ClearFake campaigns represent distinct threat actors with varying objectives, their use of advanced social engineering and PowerShell exploitation techniques demands heightened vigilance from organizations worldwide. By staying informed and implementing better cybersecurity practices, businesses can better defend against these threats.

CISA & EAC Release Guide to Enhance Election Security Through Public Communication


In a joint effort to enhance election security and public confidence, the Cybersecurity and Infrastructure Security Agency (CISA) and the U.S. Election Assistance Commission (EAC) have released a comprehensive guide titled “Enhancing Election Security Through Public Communications.” This guide on election security is designed for state, local, tribal, and territorial election officials who play a critical role as the primary sources of official election information.

Why Communication is Important in Election Security

Open and transparent communication with the American public is essential to maintaining trust in the electoral process. State and local election officials are on the front lines, engaging with the public and the media on numerous election-related topics. These range from election dates and deadlines to voter registration, candidate filings, voting locations, election worker recruitment, security measures, and the publication of results.

The new guide aims to provide these officials with a strong framework and practical tools to develop and implement an effective, year-round communications plan. “The ability for election officials to be transparent about the elections process and communicate quickly and effectively with the American people is crucial for building and maintaining their trust in the security and integrity of our elections process,” stated CISA Senior Advisor Cait Conley.

The election security guide offers practical advice on how to tailor communication plans to the specific needs and resources of different jurisdictions. It includes worksheets to help officials develop core components of their communication strategies. This approach recognizes the diverse nature of election administration across the United States, where varying local contexts require customized solutions.

EAC Chairman Ben Hovland, Vice Chair Donald Palmer, Commissioner Thomas Hicks, and Commissioner Christy McCormick collectively emphasized the critical role of election officials as trusted sources of information. “This resource supports election officials to successfully deliver accurate communication to voters with the critical information they need before and after Election Day,” they said. Effective and transparent communication not only aids voters in casting their ballots but also helps instill confidence in the security and accuracy of the election results.

How Tailored Communication Enhances Election Security

The release of this guide on election security comes at a crucial time when trust in the electoral process is increasingly under scrutiny. In recent years, the rise of misinformation and cyber threats has posed significant challenges to the integrity of elections worldwide. By equipping election officials with the tools to communicate effectively and transparently, CISA and the EAC are taking proactive steps to safeguard the democratic process.

One of the strengths of this guide is its emphasis on tailoring communication strategies to the unique needs of different jurisdictions. This is a pragmatic approach that acknowledges the diverse landscape of election administration in the U.S. It recognizes that a one-size-fits-all solution is not feasible and that local context matters significantly in how information is disseminated and received.

Furthermore, the guide’s focus on year-round communication is a noteworthy aspect. Election security is not just a concern during election cycles but is a continuous process that requires ongoing vigilance and engagement with the public. By encouraging a year-round communication plan, the guide promotes sustained efforts to build and maintain public trust.

However, while the guide is a step in the right direction, its effectiveness will largely depend on the implementation by election officials at all levels. Adequate training and resources must be provided to ensure that officials can effectively utilize the tools and strategies outlined in the guide. Additionally, there needs to be a concerted effort to address potential barriers to effective communication, such as limited funding or technological challenges in certain jurisdictions.

To Wrap Up

The “Enhancing Election Security Through Public Communications” guide by CISA and the EAC is a timely and necessary resource for election officials across the United States. As election officials begin to implement the strategies outlined in the guide, it is imperative that they receive the support and resources needed to overcome any challenges. Ultimately, the success of this initiative will hinge on the ability of election officials to engage with the public in a clear, accurate, and transparent manner, thereby reinforcing the security and integrity of the election process.