
Feeding the Phishes

18 June 2024 at 09:21

PHISHING SCHOOL

Bypassing Phishing Link Filters

You could have a solid pretext that slips right by your target's secure email gateway (SEG); however, if your link looks too sketchy (or, you know, "smells phishy"), your phish could go belly-up before it even gets a bite. That's why I tend to think of link filters as their own separate control. Let's talk briefly about how these link filters work and then explore some ways we might be able to bypass them.

What the Filter? (WTF)

Over the past few years, I've noticed a growing interest in detecting phishing based on the links themselves, or at least there are several very popular SEGs that place a very high weight on the presence of a link in an email. I've seen so much of this that I have made this type of detection one of my first troubleshooting steps when a SEG blocks me: I'll simply remove all links from the email and check whether the message content gets through.

In at least one case, I encountered a SEG that blocked ANY email that contained a link to ANY unrecognized domain, no matter what the wording or subject line said. In this case, I believe my client was curating a list of allowed domains and had instructed the SEG to block everything else. It's an extreme measure, but it addresses a very valid concern. Emails with links are inherently riskier than emails that do not contain links; therefore, most modern SEGs will increase the SPAM score of any message that contains a link and will often apply additional scrutiny to the links themselves.

How Link Filters Work — Finding the Links

If a SEG filters links in an email, it will first need to detect/parse each link in the content. To do this, almost any experienced software engineer will reach straight for Regular Expressions ("regex" for short):

Stand back! I know regular expressions

To which, any other experienced software engineer will be quick to remind us that while regex is extremely powerful, it is also easy to screw up:

99 problems (and regex is one)

As an example, here are just a few of the most popular regex patterns for parsing links that I found on Stack Overflow:

(http|ftp|https):\/\/([\w_-]+(?:(?:\.[\w_-]+)+))([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])

(?:(?:https?|ftp|file):\/\/|www\.|ftp\.)(?:\([-A-Z0-9+&@#\/%=~_|$?!:,.]*\)|[-A-Z0-9+&@#\/%=~_|$?!:,.])*(?:\([-A-Z0-9+&@#\/%=~_|$?!:,.]*\)|[A-Z0-9+&@#\/%=~_|$])

(?:(?:https?|ftp):\/\/)?[\w/\-?=%.]+\.[\w/\-&?=%.]+

([\w+]+\:\/\/)?([\w\d-]+\.)*[\w-]+[\.\:]\w+([\/\?\=\&\#\.]?[\w-]+)*\/?

(?i)\b((?:[a-z][\w-]+:(?:/{1,3}|[a-z0-9%])|www\d{0,3}[.]|[a-z0-9.\-]+[.][a-z]{2,4}/)(?:[^\s()<>]+|\(([^\s()<>]+|(\([^\s()<>]+\)))*\))+(?:\(([^\s()<>]+|(\([^\s()<>]+\)))*\)|[^\s`!()\[\]{};:’”.,<>?«»“”‘’]))

Don’t worry if you don’t know what any of these mean. I consider myself to be well versed in regex and even I have no opinion on which of these options would be better than the others. However, there are a couple things I would like to note from these examples:

  1. There is no “right” answer; URLs can be very complex
  2. Most (but not all) are looking for strings that start with “http” or something similar

These are also some of the most popular (think "smart people") answers to this problem of parsing links. I could also imagine that some software engineers would take a more naive approach of searching for all anchor ("<a>") HTML tags or looking for "href=" to indicate the start of a link. No matter which solution the software engineer chooses, there are likely going to be at least some valid URLs that their parser doesn't catch, which might leave room for a categorical bypass. We might also be able to evade parsers if we can avoid common indicators like "http" or break our link up into multiple sections.
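As a quick sketch of that second point, take the first pattern above and feed it a message containing a scheme-less link (the message text and domain here are hypothetical); a parser built on that pattern never sees the URL at all:

import re

# First pattern from the list above: it requires an explicit http/https/ftp scheme
URL_RE = re.compile(r'(http|ftp|https):\/\/([\w_-]+(?:(?:\.[\w_-]+)+))([\w.,@?^=%&:\/~+#-]*[\w@?^=%&\/~+#-])')

print(URL_RE.findall("Log in at https://accounts.example.com/login?id=34567"))  # link detected
print(URL_RE.findall("Log in at accounts.example.com/login?id=34567"))          # [] -- no scheme, no link detected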

Side Note: Did you see that some of these popular URL parsers account for FTP and some don’t? Did you know that most browsers can connect to FTP shares? Have you ever tried to deliver a phishing payload over an anonymous FTP link?

How Link Filters Work — Filtering the Links

Once a SEG has parsed out all the links in an email, how should it determine which ones look legitimate and which ones don’t? Most SEGs these days look at two major factors for each link:

  1. The reputation of the domain
  2. How the link “looks”

Checking the domain reputation is pretty straightforward; you just take what sits between the first two forward slashes ("//") and the next forward slash ("/") and look up the resulting domain or subdomain on VirusTotal or similar. Many SEGs will share intelligence on known bad domains with other security products and vice versa. If your domain has been flagged as malicious, the SEG will either block the email or remove the link.
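A minimal sketch of that domain-extraction step in Python (the link value is hypothetical):

from urllib.parse import urlparse

link = "https://accounts.example.com/login?id=34567"
domain = urlparse(link).netloc  # everything between the "//" and the next "/"
print(domain)                   # accounts.example.com -- this is what gets looked up on VirusTotal, etc.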

As far as checking how the link “looks”, most SEGs these days use artificial intelligence or machine learning (i.e., AI/ML) to categorize links as malicious or benign. These AI models have been trained on a large volume of known-bad links and can detect themes and patterns commonly used by SPAM authors. As phishers, I think it’s important for us to focus on the “known-bad” part of that statement.

I've seen a talk in which one researcher claimed their AI model was able to detect over 98% of the malicious links in their training data. At first glance, this seems like a very impressive number; however, we need to keep in mind that in order to have a training set of malicious links in the first place, humans had to detect 100% of that training set as malicious. Therefore, the AI model was only 98% as good as a human at detecting phishing links based solely on the "look" of the link. I would imagine it would do much worse on a set of unknown-bad links, if such a set could hypothetically be attained. To slip through the cracks, we should aim to put our links in that unknown-bad category.

Even though we are up against AI models, I like to remind myself that these models can only be trained on human-curated data and therefore can only theoretically approach the competence of a human, but not surpass humans at this task. If we can make our links look convincing enough for a human, the AI should not give us any trouble.

Bypassing Link Filters

Given what we now know about how link filters work, we should have two main tactics available to us for bypassing the filter:

  1. Format our link so that it slips through the link parsing phase
  2. Make our link “look” more legitimate

If the parser doesn’t register our link as a link, then it can’t apply any additional scrutiny to the location. If we can make our link location look like some legitimate link, then even if we can’t bypass the parser, we might get the green light anyway. Please note that these approaches are not mutually exclusive and you might have greater success mixing techniques.

Bypassing the Parser

Don’t use an anchor tag

One of the most basic parser bypasses I have found for some SEGs is to simply leave the link URL in plaintext by removing the hyperlink in Outlook. Normally, link URLs are placed in the “hypertext reference” (href) attribute of an HTML anchor tag (<a>). As I mentioned earlier, one naive but surprisingly common solution for parsing links is to use an HTML parsing library like BeautifulSoup in Python. For example:

from bs4 import BeautifulSoup

# email.content holds the raw HTML body of the message being scanned
soup = BeautifulSoup(email.content, 'html.parser')
links = soup.find_all("a")  # Find all elements with the tag <a>
for link in links:
    print("Link:", link.get("href"), "Text:", link.string)

Any SEG that uses this approach to parse links won’t see a URL outside of an anchor tag. While a URL that is not a clickable link might look a little odd to the end user, it’s generally worth the tradeoff when this bypass works. In many cases, mail clients will parse and display URLs as hyperlinks even if they are not in an anchor tag; therefore, there is usually little to no downside of using this technique.
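To see why the plaintext-URL trick slips past that style of parser, continue the snippet above with a hypothetical message body that contains a bare URL but no anchor tag:

html = "Please go to https://portal.example.com/login to update your password."
soup = BeautifulSoup(html, 'html.parser')
print(soup.find_all("a"))  # [] -- no anchor tags, so an anchor-based parser finds no links to inspect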

Use a Base Tag (a.k.a BaseStriker Attack)

One interesting method of bypassing some link filters is to use a little-known HTML tag called “base”. This tag allows you to set the base domain for any links that use relative references (i.e., links with hrefs that start with “/something” instead of direct references like “https://example.com/something”). In this case, the “https://example.com” would be considered the “base” of the URL. By defining the base using the HTML base tag in the header of the HTML content, you can then use just relative references in the body of the message. While HTML headers frequently contain URLs for things like CSS or XML schemas, the header is usually not expected to contain anything malicious and may be overlooked by a link parser. This technique is known as the “BaseStriker” attack and has been known to work against some very popular SEGs:

https://www.cyberdefensemagazine.com/basestriker-attack-technique-allow-to-bypass-microsoft-office-365-anti-phishing-filter/

This technique works because you essentially break your link into two pieces: the domain lives in the HTML header, and the rest of the URL sits in the anchor tags in the body. Because the anchor tags' hrefs don't start with "https://", they aren't detected as links.
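Here is a rough sketch of what such a message body could look like, built with Python's standard email library (the domain and path are placeholders):

from email.mime.text import MIMEText

html_body = """
<html>
  <head><base href="https://phish.example.com"></head>
  <body>
    <p>Please <a href="/login?id=34567">review the updated policy</a>.</p>
  </body>
</html>
"""
msg = MIMEText(html_body, "html")
msg["Subject"] = "Updated policy"
# A parser hunting for hrefs that start with "http" only ever sees "/login?id=34567" in the body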

Scheming Little Bypasses

The first part of a URL, before the colon and forward slashes, is what’s known as the “scheme”:

URI = scheme ":" ["//" authority] path ["?" query] ["#" fragment]

As mentioned earlier, one of the more robust ways to detect URLs is to look for anything that looks like a scheme (e.g., "http://" or "https://") followed by a sequence of characters that would be allowed in a URL. If we simply leave off the scheme, many link parsers will not detect our URL, but it will still look like a URL to a human:

accounts.goooogle.com/login?id=34567

A human might easily be convinced to simply copy and paste this link into their browser for us. In addition, there are quite a few legitimate schemes that could open a program on our target user’s system and potentially slip through a URL parser that is only looking for web links:

https://en.wikipedia.org/wiki/List_of_URI_schemes

There are at least a few that could be very useful as phishing links ;)

QR Phishing

What if there isn’t a link in the email at all? What if it’s an image instead? You can use a tool like SquarePhish to automate phishing with QR codes instead of traditional links:

GitHub - secureworks/squarephish

I haven’t played with this yet, but have heard good things from friends that have used similar techniques. If you want to play with automating this attack yourself, NodeJS has a simple library for generating QRs:

qrcode
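If you would rather prototype in Python, the similarly named qrcode package (with Pillow installed for PNG output) does the same job; a minimal sketch with a hypothetical URL:

import qrcode

img = qrcode.make("https://phish.example.com/login?id=34567")  # returns a PIL image object
img.save("login_qr.png")                                       # embed this image in the email body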

Bypassing the Filter

Don’t Mask

(Hold on. I need to get on my soapbox…) I can’t count how many times I’ve been blocked because of a masked link only to find that unmasking the link would get the same pretext through. I think this is because spammers have thoroughly abused this feature of anchor tags in the past and average email users seldom use masked links. Link filters tend to see masked links as far more dangerous than regular links; therefore, just use a regular link. It seems like everyone these days knows how to hover a link and check its real location anyway, so masked links are even bad at tricking humans now. Don’t be cliche. Don’t use masked links.

Use Categorized Domains

Many link filters block or remove links to domains that are either uncategorized, categorized as malicious, or were recently registered. Therefore, it’s generally a good idea to use domains that have been parked long enough to be categorized. We’ve already touched on this in “One Phish Two Phish, Red Teams Spew Phish”, so I’ll skip the process of getting a good domain; however, just know that the same rules apply here.

Use “Legitimate” Domains

If you don’t want to go through all the trouble of maintaining categorized domains for phishing links, there are some generally trustworthy domains you can leverage instead. One example I recently saw “in-the-wild” was a spammer using a sites.google.com link. They just hosted their phishing page on Google! I thought this was brilliant because I would expect most link filters to allow Google, and even most end users would think anything on google.com must be legit. Some other similar examples would be hosting your phishing sites as static pages on GitHub, in an S3 bucket, other common content delivery networks (CDNs), or on SharePoint, etc. There are tons of seemingly “legitimate” sites that allow users to host pages of arbitrary HTML content.

Arbitrary Redirects

Along the same lines as hosting your phishing site on a trusted domain is using trusted domains to redirect to your phishing site. One classic example of this would be link shorteners like TinyURL. While TinyURL has been abused for SPAM to the point that I would expect most SEGs to block TinyURL links, it does demonstrate the usefulness of arbitrary redirects.

A more useful form of arbitrary redirect for bypassing link filters is a URL with either a cross-site scripting (XSS) vulnerability that lets us trigger a 'window.location' change or an HTTP GET parameter that specifies where the page should redirect to. As part of my reconnaissance phase, I like to spend at least a few minutes on my target's main website looking for these types of vulnerabilities. They are surprisingly common, and while an arbitrary redirect might be considered a low-risk finding on a web application penetration test report, it can be extremely useful when combined with phishing. Your links will point to a URL on your target organization's main website, so it is extremely unlikely that a link filter or even a human will see the danger. In some cases, you may find that your target organization has configured an explicit allow list in the SEG for links that point to their domains.

Link to an Attachment

Did you know that links in an email can also point to an email attachment? Instead of providing a URL in the href of your anchor tag, you can specify the content identifier (CID) of an attachment (e.g., href="cid:mycontentidentifier@content.net"). One way I have used this trick to bypass link filters is to link to an HTML attachment and use obfuscated JavaScript in that attachment to redirect the user to the phishing site. Because our href does not look like a URL, most SEGs will treat the link as benign. You could also link to a PDF, DOCX, or several other usually allowed file types that contain the real phishing link. This might require a little more setup in your pretext to instruct the user, or you can just hope they click the link after opening the document. In that case, I think it makes the most sense to put any additional instructions inside the document, where the contents are less likely to be scrutinized by the SEG's content filter.
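Here is a rough sketch of that structure using Python's email library; the content ID, filename, and attachment contents are just placeholders:

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart("related")
msg["Subject"] = "Updated benefits policy"

# The visible link's href is a Content-ID reference, not a URL
body = MIMEText('<a href="cid:mycontentidentifier@content.net">Open the policy document</a>', "html")
msg.attach(body)

# The attached HTML page that would hold the obfuscated JavaScript redirect
page = MIMEText("<html><script>/* obfuscated redirect goes here */</script></html>", "html")
page.add_header("Content-ID", "<mycontentidentifier@content.net>")
page.add_header("Content-Disposition", "attachment", filename="policy.html")
msg.attach(page)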

Pick Up The Phone

This blog rounds out our “message inbound” controls that we have to bypass for social engineering pretexts. It would not be complete without mentioning one of the simplest bypasses of them all:

Not using email!

If you pick up the phone and talk directly to your target, your pretext travels from your mouth, through the phone, then directly into their ear, and hits their brain without ever passing through a content or reputation filter.

Along the same lines, Zoom calls, Teams chats, LinkedIn messages, and just about any other common business communication channel will likely be subject to far fewer controls than email. I've trained quite a few red teamers who prefer phone calls over emails because it greatly simplifies their workflow. Just a few awkward calls are usually all it takes to gain access to a target environment.

More interactive forms of communication, like phone calls, also allow you to gauge how the target is feeling about your pretext in real time. It’s usually obvious within seconds whether someone believes you and wants to help or if they think you’re full of it and it’s time to cut your losses, hang up the phone, and try someone else. You can also use phone calls as a way to prime a target for a follow-up email to add perceived legitimacy. Getting our message to the user is half the battle, and social engineering phone calls can be a powerful shortcut.

In Summary

If you need to bypass a link filter, either:

  1. Make your link look like it’s not a link
  2. Make your link look like a “legitimate” link

People still use links in emails all the time. You just need to blend in with the “real” ones and you can trick the filter. If you are really in a pinch, just call your targets instead. It feels more personal, but it gets the job done quickly.


Feeding the Phishes was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Feeding the Phishes appeared first on Security Boulevard.


Lateral Movement with the .NET Profiler

11 June 2024 at 12:35

Lateral Movement with the .NET Profiler

The accompanying code for this blogpost can be found HERE.

Intro

I spend a lot of my free time modding Unity games. Since Unity games are scripted in C#, they are very easy to work with compared to games that compile to unmanaged code. This makes it a perfect hobby project to pick up and set down without getting too sweaty.

As I got deeper into modding C# games, I realized that hooking functions is actually slightly more complicated than it is in unmanaged programs, which is counterintuitive because just about everything else is much, much easier.

For unmanaged code, hooking functions is relatively straightforward. The basic steps are:

  • Allocate some memory to house the code you want to run when a function is called and write your instructions there
  • Overwrite the beginning of the original function’s instructions to jump to your new code
  • Handle all the fiddly details necessary to ensure that the program’s execution gets back to the original function and that the stack isn’t sloppy joe meat by the end of it

With .NET and Mono, it isn’t as simple. .NET assemblies’ functions are made up of a binary instruction set known as the Common Intermediate Language (CIL) that gets just-in-time (JIT) compiled to machine instructions at runtime by the Common Language Runtime (CLR).

The main issue with attempting to hook managed code is that by the time you inject into the process you want to hook a function in, the target function may have already been JIT'ed, and if so, the CLR has cached the x86 instructions it was translated into. If you modify the CIL bytecode and the function gets called again, the CLR may just execute the cached x86 instructions that were compiled before your modifications. In addition to this issue, there are myriad little corner cases that make this a big headache. (Although in reality, this is a solved problem. There have been many great solutions and frameworks built to make this easy for developers, such as Harmony and MonoMod, but I was still curious to learn more about it.)

.NET Profilers

In my googling for hooking solutions, I came across Microsoft’s .NET profiling API, which wasn’t that helpful for my modding needs, but it did seem to have some handy primitives for red teaming! It is designed to allow for instrumentation of .NET processes by implementing a callback interface in an unmanaged COM server DLL that gets loaded into a given .NET process.

The CLR then calls functions from the interface when different events happen during execution. For pretty much anything that goes on in the CLR, you can implement a callback that gets called to inspect and manipulate behavior at runtime, such as when assemblies and modules are loaded, when functions are JIT compiled, and much more. Just look at all these callbacks, and this isn’t even all of them!

.NET Profiling API Callbacks

For more information about the basics of how these profilers work, I recommend watching Pavel Yosifovich's talk on the subject. It was far and away the most valuable resource I found.


The Offensive Value of the .NET Profiler

Execution and Persistence:

Upon execution, the CLR for a given process examines its environment for three specific variables that cause a profiling DLL to be loaded:

Profiler-Specific Environment Variables
  • COR_ENABLE_PROFILING — This is a flag that enables profiling for the given process if set to 1, meaning the profiler DLL will be loaded into the process
  • COR_PROFILER — This is the CLSID that will be handed to the profiler DLL to see if it is the correct COM server. The profiler DLL can choose not to check this though and load no matter what this CLSID is
  • COR_PROFILER_PATH — The path to the profiler DLL that will be loaded

If all of these variables are present, profiling is enabled, and the DLL exists on disk, the profiler DLL will be loaded into the process at the start of execution. This gives us a pretty nice code execution primitive to load a DLL into an arbitrary .NET process.
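As a quick illustration of this primitive (separate from the BOF covered later), here is a hedged Python sketch that launches a .NET executable with those three variables set; the CLSID and paths are placeholders:

import os
import subprocess

env = dict(os.environ)
env["COR_ENABLE_PROFILING"] = "1"                               # turn profiling on for the child process
env["COR_PROFILER"] = "{11111111-2222-3333-4444-555555555555}"  # arbitrary CLSID (placeholder)
env["COR_PROFILER_PATH"] = r"C:\Users\Public\profiler.dll"      # path to our "fake" profiler DLL

# Any managed (.NET) executable will do; the CLR loads the profiler DLL at process start
subprocess.Popen([r"C:\Path\To\SomeDotNetApp.exe"], env=env)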

This has been documented for some time and has been observed in use by threat actors in the wild. I ran across this blog by Bohops detailing other interesting abuses of the .NET profiling infrastructure in Windows. It references a blog by Casey Smith from 2017 detailing loading a DLL this way, and MITRE has a technique for this as well as some in-the-wild examples.

Since environment variables can be set system-wide, this also works as a form of persistence: whenever a .NET process executes, it will load the specified DLL.

The minimum viable profiler that can abuse this is a “fake” COM server DLL that exports the function DllGetClassObject, which is the function that is used to check the CLSID of the COM server DLL. As stated above though, there’s no need to actually implement the logic of the check here, and arbitrary code can be executed instead:

Minimum viable “fake” profiler
Execution of the “fake” profiler in a .NET process

Lateral Movement:

I was talking to Lee Chagolla-Christensen (@tifkin) about ways to set these environment variables on a remote computer to load a DLL via UNC paths, and he let me know that the Win32_ProcessStartup WMI class allows for environment variables to be set for a specific process, meaning that this could be abused with a Win32_Process Create call to execute a .NET process remotely and load a .NET profiler DLL! Thanks Lee!

So I set about creating a BOF and payload to allow this to be used more easily. The results are HERE.

I modified Yaxser’s WMI Lateral Movement BOF to include a Win32_ProcessStartup class with the appropriate environment variables defined and a user-defined DLL path to enable lateral movement via the .NET profiler.

Adding environment variables to enable WMI lateral movement

Additionally, I modified Pavel Yosifovich’s example .NET profiler to be a better payload. I utilized this tutorial from ired.team to store a shellcode payload as a resource that can be hot swapped, and I used the function ICorProfilerInfo2::SetEnterLeaveFunctionHooks2 to set an enter hook on all JITed functions. The hook will load and execute the shellcode from the resource, essentially performing process hollowing, because normal functionality of the hooked function will cease indefinitely if the payload is something like a Cobalt Strike beacon.

Setting the hooks during initialization
Adding the shellcode execution to the function enter hook

In that same file, CoreProfiler.cpp, you can see all the other handy callbacks that could be used for execution primitives or even more interesting use cases. A neat example that uses the .NET profiler for evasion is Omer Yair’s InvisiShell, which monitors assembly loads in PowerShell processes to then patch out functions to disable AMSI. I believe there’s a lot of fertile ground here for further research regarding all the callbacks exposed by the CLR.

Putting it all together

When you use the payload and BOF together, you get lateral movement that looks something like this:

.NET Profiler BOF execution

You might be thinking: "hey Dan! Didn't you say earlier you wanted to load the payload from a UNC path?" Yes, I did; good memory. Sadly, since we are using WMI, we run into the double-hop problem, meaning the process we execute on the remote machine can't authenticate to a remote file share to pull our payload via a UNC path. That's OK though, because the DLL can instead be loaded from a WebDAV server:

.NET Profiler BOF execution featuring WebDAV

You can even serve the payload through the beacon you are executing the BOF with by utilizing wsgidav. First, execute this command to host your WebDAV server locally on your workstation:

wsgidav --host=0.0.0.0 --port=80 --root=/payload/folder --auth=anonymous

and then start a reverse portforward on your beacon:

rportfwd 80 localhost 80

Now you can execute a .NET process and have your payload pulled over and executed automatically.

Conclusion

I found this "feature" of the .NET Profiler to be pretty neat, albeit a little unwieldy. You'll see that the payload is purely for demonstration purposes. No attempt has been made to make it evasive, so Defender may eat it as soon as it's built. Sorry!

I hope this gets folks more curious about all the cool stuff you can do with the .NET profiler though, and I’m sure there are other ways out there to remotely set environment variables to make it even more useful. I briefly looked into setx and other means of setting them via remote registry, but it seemed like the changes didn’t take effect until after a reboot. I bet there is some way to make it work though!

Many thanks again to Lee, Pavel, Yaxser, and Mantvydas for all the prior research, since the payload and BOF is really just a collage of your work.


Lateral Movement with the .NET Profiler was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Lateral Movement with the .NET Profiler appeared first on Security Boulevard.

Ghostwriter v4.2

10 June 2024 at 16:01

Ghostwriter v4.2: Project Documents & Reporting Enhancements

After April’s massive Ghostwriter v4.1 release, we received some great feedback and ideas. We got a little carried away working on these and created a release so big we had to call it v4.2. This release contains some fantastic changes and additions to the features introduced in April’s release. Let’s get to the highlights!

Improving Customizable Fields

Ghostwriter v4.1 introduced custom fields, and seeing the community use them so creatively was awesome. What we saw gave us some ideas for a few big improvements.

The rich text fields support the Jinja2 templating language, so loops quickly became a talking point. Looping over project objectives, findings, hosts, and other things to create dynamic lists, table rows, or sections is incredibly powerful, so we had to do it.

You can now use Jinja2-style expressions with the new li, tr, and p tags to create list items, table rows, and text sections. Here is an example of building a dynamic list inside Ghostwriter’s WYSIWYG editor.

Jinja2-style Loop in the WYSIWYG Editor

This screenshot includes examples of a few notable features. We’re using the new li tag with a for loop to create a bulleted list of findings. We have access to Jinja2 filters, including Ghostwriter’s custom filters, so we use the filter_severity filter to limit the loop to findings with a severity rating matching critical, high, or medium. The first and last bullets won’t be in the list in the final Word document.

The middle bullet will repeat for each finding to create our list. It includes the title and severity and uses the regex_search filter to pull the first sentence of the finding’s description. The use of severity_rt here is also worth a call-out. Some community members asked about nesting rich text fields inside of other rich text fields, like the pre-formatted severity_rt text for a finding. Not only can we use severity_rt inside this field, but we can also add formatting, like changing the text to bold.

Here is the above list rendered in a Word document. The pre-formatted finding.severity_rt appears with the proper color formatting and the bold formatting added in the WYSIWYG editor.

Output of the Jinja2 Loop in Microsoft Word

These additions introduced some complexity. A typo or syntax mistake could break the Jinja2 templating and prevent a report from generating, and the resulting error message wasn’t always helpful for tracking down the offending line of text. To help with this, Ghostwriter v4.2 validates your Jinja2 when you save your work, so you don’t need to worry about accidentally saving any invalid Jinja2 and causing a headache later on.

Validating the Jinja2 didn’t cover every possible issue, though. Even valid Jinja2 can fail to render if the report data causes an error (e.g., trying to divide by zero or referencing a non-existent value). To help with that, we significantly improved error handling to catch Jinja2 syntax and template errors at render time and surface more helpful information about them. When content prevents a report from generating, error messages point to the report’s section containing the problem.

In this example, a field called exec_summ contains this Jinja2 statement: {{ 100/0 }}. That’s valid Jinja2, but you can’t actually divide by zero, so rendering will always fail. With Ghostwriter v4.2’s error handling enhancements, the error message will direct us to the field name and include information we can use to track down the problem.

New Explicit Error Message for a Failed Report

More Reporting Improvements

Ghostwriter v4.2 also includes enhancements to the reporting engine. The most significant change is the introduction of project documents. You don’t always need to generate a full report with findings. You may need to generate other project-related documents before or after the assessment is done, like rules of engagement, statement of work, or attestation of testing documents. You could technically do this with older versions of Ghostwriter by generating a report with a different template, but that was not intuitive.

The project dashboard’s Reports tab is now called Reporting and contains an option to generate JSON, docx, or pptx reports with your project’s data. This works like the reports you’re familiar with but uses only the project data (e.g., no findings or report-specific values).

New Project Document Generation Under the Project Dashboard

We wanted to make it easier to generate project documents like this because custom fields open up many possibilities. For example, at SpecterOps, we provide clients with daily status reports after each day of testing. We can accomplish this task with a Daily Status Report field added to projects and the new project reporting feature. Consultants can now easily collaborate on updates to the daily status and then generate their daily status document from the same dashboard.

That’s not all, though. We realized one globally configured report filename would never apply to every possible document or report you might want to generate, so we made some changes. The biggest change is that your filename can now be configured using Jinja2-style templating with access to your project data, familiar filters, and a few additional values (e.g., `now` to include the current date and time). That means you can configure a filename like this that dynamically includes information about your project, client, and more:

{{now|format_datetime("Y-m-d")}} {{company_name}} — {{client.name}} {{project.project_type}} Report

That template would produce a filename like 2024-06-05 Acme Corp — SpecterOps Red Team Report.docx. That's great for a default, but you'd probably be renaming documents often as you expanded the types of documents generated with Ghostwriter. To help with that, we made it possible to configure separate global defaults for reports and project documents.

Jinja2 Templating for Default Filenames for Report Downloads

We also added the option to override the filename per template! To revisit the daily status report example, your Word template for such a report can override the global filename template with a filename template like {{now|format_datetime("Y-m-d")}} {{company_name}} Daily Status Report.

Another enhancement for this feature is a third template type, Project DOCX. You can use this to flag Microsoft Word templates that are intended only for project documents. Templates flagged with this type will only appear in the dropdown menu on the project dashboard. This will keep them out of the list of templates for your reports.

Finally, Ghostwriter v4.2 also includes a feature proposed by community member Dominic Whewell, support for templating the document properties in your Word templates. If you set values like the document title, author(s), and company, you can now do that in your templates and let Ghostwriter fill in that information for you.

Jinja2 Templating in Document Properties

Background Tasks with Cron

Ghostwriter v4.2 also contains various bug fixes and quality-of-life improvements. There are too many to detail here, but one worth mentioning is support for cron expressions with scheduled background tasks. Previously, you could schedule tasks to run at certain time intervals, but you couldn't control some of the finer points, like the day of the week.

You can now use cron to schedule a task on a more refined schedule, like only on a specific day of the week or weekdays. When scheduling a task, select Cron for the Schedule Type and provide a cron expression, e.g., 00 12 * * 1-5 to run the task every weekday at 12:00.

Using Cron to Schedule a Background Task

Check out the full CHANGELOG for all the details on this release.

Release Ghostwriter v4.2.0 · GhostManager/Ghostwriter


Ghostwriter v4.2 was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post Ghostwriter v4.2 appeared first on Security Boulevard.

One Phish Two Phish, Red Teams Spew Phish

4 June 2024 at 13:52

PHISHING SCHOOL

How to Give your Phishing Domains a Reputation Boost

“Armed with the foreknowledge of my own death, I knew the giant couldn’t kill me. All the same, I preferred to keep my bones unbroken” — Big Phish

When we send out our phishing emails, we are reckoning with giants. Spamhaus, SpamAssassin, SpamTitan, Barracuda, and many more giants wish to grind your bones to bake their bread. They are big. They are scary. But they don't catch everything. Just like Edward Bloom learned, the best way to deal with giants is to make a good first impression.

What’s your name, giant? — Karl

That’s how we will deal with these giants as well. Let’s talk about proper etiquette when speaking with giants.

“He Ate My Dog”

The first opportunity a mail server has to reject your email is right at the beginning of an SMTP exchange. The server can run checks on the reputation of both your sending IP and the domain you claim to be sending from in your FROM address. In general, the main controls that will get you blocked by a good secure email gateway (SEG) are some version of the following:

  • Does your sending IP show up on a known SPAM list like Spamhaus?
  • Is the geographical location of the sending IP in another country?
  • What’s the reputation/category of the sending domain?
  • What’s the age of the sending domain?
  • Does SPF fail?
He ate my dog

Of course, all of these checks are generally configurable, and not every SEG will perform all of them. These are just some of the most common controls I tend to expect as the root cause of a failure at this stage in the phish's lifecycle. None of these is really all that difficult to address or bypass, so by taking them into account, we can usually ensure our phishing campaigns are not blocked at this layer.

Bypassing Sender Reputation Checks

When we attempt to deliver our message, we will be connecting to a mail server for the target email’s domain, telling it who we are and who we want to send a message to, and then giving it the contents of the message. Here is an example of what that conversation looks like under the hood with data we control in blue and server responses in green:

SMTP Greetings
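In plain text, a representative version of that exchange looks roughly like this (C: is the data we control, S: is the server's response; the recipient address is hypothetical):

S: 220 mail.fabrikam.com ESMTP
C: EHLO contoso.com
S: 250 mail.fabrikam.com Hello
C: MAIL FROM:<chris@contoso.com>
S: 250 2.1.0 Sender OK
C: RCPT TO:<karl@fabrikam.com>
S: 250 2.1.5 Recipient OK
C: DATA
S: 354 Start mail input; end with <CRLF>.<CRLF>
C: (message headers and body, terminated by a line containing only ".")
S: 250 2.0.0 Queued for delivery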

In general, IP address blocks are easy for us to troubleshoot because if our sending IP is blocked, the server will typically just terminate the TCP connection, and we won't even be able to start this SMTP call and response. It's pretty rare to be blocked by IP right off the bat; it typically means you're sending from an IP that is associated with a lot of SPAM, or that the GeoIP result for your sending IP is in a blocked country or region. If that's the case, just try sending from another IP.

The next opportunity for the server to block us is by the reputation of the domain we supply as the sending domain. Notice in our example that the sending domain is ‘contoso.com’ and is used in both the EHLO and MAIL FROM commands. These domains typically match, but they don’t have to. Feel free to tinker with mismatching these values to see how different mail servers respond.

You can think of the EHLO and MAIL FROM commands as saying “I’m a mail server for contoso.com, and I’m going to send an email from chris@contoso.com to one of your users”. At this point, the receiving server has an opportunity to accept or reject our claim by checking a few things:

  • Is contoso.com on a known SPAM list? (never going to let it through)
  • Is contoso.com explicitly whitelisted? (always going to let it through)
  • How old is the contoso.com domain?
  • Is contoso.com categorized? And if so, what category is it?
  • Has the server ever received emails from contoso.com before?
  • Does contoso.com’s MX record match your sending IP address?
  • Does a reverse DNS lookup of your sending IP resolve to <something>.contoso.com?
  • Does contoso.com publish an SPF record that includes your sending IP address?

Most mail servers will perform one or more of these checks to obtain hints about whether you can be trusted or not. Some of these hints, like reverse DNS records, will increase our reputation if they pass the check, but won’t really hurt our reputation too much if they fail. Others, like trying to spoof a domain from an IP that is not in that domain’s SPF record, will likely get us burned right away. Therefore, we should try to pass as many of these checks as we can, while avoiding techniques that could be clearly identified as a spoof.

So, Which Domain Should I Use?

When choosing our sending domain, we have a few options, each with its own pros and cons. We can either spoof a domain that we do not own, or buy a domain. If we buy a domain, we can either get a brand new domain of our choosing, or buy an expired domain that was previously bought by someone else. No one method is inherently better than the others, but we should have a clear understanding of how to get the most out of each technique when making our choice.

Spoofed Domains

Pros

  • Free! ‘Nough said
  • Makes attribution difficult. You are basically framing someone else. (Note: this could be a con if you have to go through attribution and deconfliction with the blue team)
  • Probably already categorized and old enough to pass checks for newly registered domains
  • Successful spoofs can look very legitimate, and therefore boost success rates

Cons

  • Limited to existing domains that you can find
  • You may be able to use non-existent domains, but they are easy to identify and often blocked. It’s generally better to purchase in this case.
  • Most high-value domains have implemented email security settings like SPF that could make it difficult, if not impossible to spoof without detection
  • We can’t set up our own email security settings on the domain that might boost our reputation (no control and no proof-of-ownership)
  • We have no influence over domain category or reputation

Considerations When Spoofing Domains

Does our target mail server actually check SPF?

If not, we should be able to successfully spoof just about any domain. I’ve seen several cases where, for whatever reason, SPF checks were either turned off or misconfigured so that failing SPF did not result in rejected emails. It’s often worth a shot to try to deliver a few benign emails while intentionally failing SPF to see if the server sends back an error message or says it delivered them. While you won’t receive any bounce messages or responses from recipients (those would go to the mail server for your spoofed domain), you could try putting image links or tags for remote CSS resources in the emails that point to a web server you own and track requests for these resources as potential indicators that the emails were delivered.

Is there a similar registered domain that lacks an SPF record?

You may be able to find domains that have been registered that look very similar to your target organization’s domain, but have not been configured with any email security settings that would allow a mail server to confirm or deny a spoof attempt. In some cases, I’ve seen clients that buy up domains similar to their own to prevent typosquatting, but then forget to apply email settings to prevent spoofing. In other cases, I’ve seen opportunistic domain squatters who buy look-alike domains in case they may become valuable, and don’t bother to set up any DNS records for their domains. In either case, we can potentially spoof these domains and the mail server will have no way of knowing we don’t own them. While this might mean we are starting off with a generally low trust score, most mail servers will not outright reject messages just because the sending domain lacks a properly configured SPF record. That is because there are so many legitimate organizations that have never bothered to set one up.

Does your target organization have an expired domain listed in an ‘include’ statement in their SPF record?

Let’s say our target organization’s email domain is fabrikam.com, and we would like to spoof an internal email address (bigboss@fabrikam.com) while sending to our target users. Consider that fabrikam.com has the following SPF record as an example:

v=spf1 mx a ip4:12.34.45.56/28 include:sendgrid.net include:imexpired.net -all

This SPF record lists the hosts/IPs that are allowed to send emails from fabrikam.com. Here is a breakdown of what each piece means:

  • v=spf1 — This is an SPF version 1 TXT record, used to distinguish SPF from other TXT records
  • mx — The IP of the domain's MX (mail server) record is trusted to send emails from fabrikam.com
  • a — The IP from a forward DNS lookup on the domain itself is also trusted
  • ip4:12.34.45.56/28 — Any IP in this small range is trusted to send emails from fabrikam.com
  • include:sendgrid.net — Any IPs or IP ranges listed in sendgrid.net's SPF record are also trusted
  • include:imexpired.net — Any IPs or IP ranges listed in imexpired.net's SPF record are also trusted
  • -all — Every other sending IP not matched by the previous directives is explicitly denied from sending emails claiming to be from fabrikam.com

As this demonstration shows, the 'include' directive in SPF allows you to trust other domains to send on your domain's behalf. For example, email services like sendgrid can be configured to pass SPF checks so that sendgrid can deliver marketing emails from fabrikam.com to their customers. In our example, if the client previously trusted imexpired.net to send emails but the domain is now expired, we can buy imexpired.net and add our own server to its SPF record. We will then be able to pass SPF checks for fabrikam.com and spoof internal users.

Can we bypass SPF protections with other technical tricks?

Depending on the receiving mail server's configuration, there may be other ways we can spoof domains that we do not own. For example, some email servers do not require SPF alignment. This means that we can have a mismatch between the domain we specify in the 'MAIL FROM' SMTP command and the 'From' field in the message content itself. In these cases, we can use either a domain we own and can pass SPF for, or any domain without an SPF record, in the 'MAIL FROM' command, and then choose any other domain for the email address that our target user sees in their mail client. Another trick we can sometimes use is to supply what's known as the "null sender". To do this, we simply specify "<>" as the sender. The null sender is used for bounce messages and is sometimes allowed by spam filters for troubleshooting.
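A rough sketch of that alignment gap, reusing the fabrikam.com example from above (the sending domain here is a placeholder we would own): the envelope sender checked for SPF and the From header shown to the user are two different fields.

In the SMTP envelope:    MAIL FROM:<newsletter@domain-we-own.com>   <- what the SPF check runs against
In the message headers:  From: Big Boss <bigboss@fabrikam.com>      <- what the target sees in their mail client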

SMTP is a purely text-based protocol. This means that there is often ambiguity in determining the beginning and end of each section of data, and SMTP implementations often have parsing flaws that can be abused to bypass security features like SPF and DKIM. In 2020, Chen Jianjun et al. published a whitepaper demonstrating 18 such flaws they discovered, which affected even some big email providers like Google:

https://i.blackhat.com/USA-20/Thursday/us-20-Chen-You-Have-No-Idea-Who-Sent-That-Email-18-Attacks-On-Email-Sender-Authentication.pdf

https://www.usenix.org/system/files/sec20fall_chen-jianjun_prepub_0.pdf

They also published the tool they used to discover these vulnerabilities, as well as their research methodology. While they disclosed these vulnerabilities to the vendors they tested, they did not test every major SEG for all of these vulnerabilities, and you might find that some popular SEGs are still vulnerable ;)

In 2023, Marcello Salvati demonstrated a flaw in a popular email sending service that allowed anyone to pass SPF for millions of legitimate domains.

https://www.youtube.com/watch?v=NwnT15q_PS8

His DefCon talk on the subject demonstrates his research and discovery process as well. I know that there are similar flaws in some other major email sending services, but I will not be throwing them under the bus in this post. Happy hunting!

Purchasing Brand New Domains

Pros

  • We can set up SPF, DMARC, DKIM to ensure we pass these checks
  • We control the MX record so we can receive mail at this domain
  • More granular creative control over the name

Cons

  • May take some time to build a reputation before sending a campaign

Purchasing Expired Domains

Pros

  • We can set up SPF, DMARC, DKIM and ensure we pass these checks
  • We control the MX record so we can receive mail at this domain
  • More likely to already be categorized
  • You may have Wayback Machine results to help keep the current category
  • You might salvage some freebie social media accounts

Cons

  • Little control over the domain name. You can get keywords, but otherwise you just have to get lucky on which ones become available

Considerations for any Purchased Domain

The two major advantages of using domains we own for phishing are that we can specify the mail server and the mail security settings for the domain. This means that we can actually have back-and-forth conversations with phishing targets if we need to, which opens up the possibility of more individualized pretexts. By setting up our own SPF, DMARC, and DKIM records, we give our emails a high probability of passing reputation-based checks.

The third benefit of owning a domain is that we can control more factors that might help us categorize the domain. Email gateways, web proxies, and DNS security appliances will often crawl new domains to determine what kind of content is posted on the domain's main website, if one exists. They then make a determination of whether to block or allow requests for that domain based on the type of site it is. Security products that include category checks like this are often configurable. For example, some organizations block social media sites while others allow employees to use social media at work. In general, most network administrators will allow certain common benign categories. A couple of classic desirable categories for red teamers are 'healthcare' and 'finance'. These categories have additional benefits when used for command and control (C2) traffic, though for the purposes of delivering emails, we really just need to be categorized as anything that's not sketchy. If your domain is not categorized at all, or is in a 'bad' category, then your campaign will be dead in the water, so here are some tips on getting off the naughty list.

Categorizing New Domains

.US.ORG Domains

One of my favorite shortcuts for domain categorization is to register domains under 'us.org' or similar parent domains. Because 'us.org' is operated by a domain registrar, its Bluecoat category is 'Web Hosting', and it has been around for many years. The minute you buy one of these domains, you are automatically lumped into a generally benign and frequently allowed category.

Categorize with HumbleChameleon

While Humble Chameleon (HC) was designed to bypass multi-factor authentication on phishing assessments, it also has some great features for ‘hiding behind’ benign domains. You can set HC’s ‘primary_target’ option to point at a site you would like to mimic and let the transparent proxy do the rest.

Warning: any time your domain is crawled, your server will be forwarding traffic to the impersonated site. I have seen logs on my server where someone was scanning my site for WordPress vulnerabilities, which caused my server to relay the same malicious traffic to the site I was impersonating. While the chances of being sued for this are pretty low, please be aware that this technique may put you in a bit of a gray area. Choose targets wisely and keep good logs.

Categorizing with Clones and Nginx/Apache

If you’ve never used httrack to clone a site, you should really do yourself a favor and explore this absolute classic! How classic you ask? Xavier Roche first released this website cloner back in 1998 and it’s still a great utility for quickly cloning a site. Httrack saves website resources in a directory structure that is ideal for hosting with a static Nginx or Apache web server. The whole setup process can be easily scripted with Ansible to stand up clones for domain categorization.

Categorizing with ChatGPT and S3

The more modern version of static site hosting is to ask an AI bot to write your website for you, and then serve the static content using cheap cloud services like S3 buckets and content delivery networks. If you get stuck, or don’t know how to automate this process, you can just ask the robot ;)

Categorizing Expired Domains

Whenever you are considering an expired domain, go ahead and check its category before you buy. It's much easier to maintain a good category than to try to re-categorize a domain, so only buy it if it's already in a benign category. Once you buy one, you have two options to maintain the category: you can either generate some new content or re-host the site's old content. If you want to generate new content, the AI route is probably the most efficient. However, my preferred approach is to just put the content of the old site back up on the Internet. If that content is what earned the original categorization, then it should also keep the domain in the same category.

Categorizing with Wayback

If the expired domain you just purchased was indexed by the Wayback Machine, then you can download the old pages with a simple Ruby script:

https://github.com/hartator/wayback-machine-downloader

This script is insanely easy to install and use, and will generally get you a nice static site clone within minutes when paired with Nginx, Apache, etc. Keep in mind that you may want to manually modify site content that might still be under copyright or trademark from the prior owner.
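For reference, the basic invocation looks something like this (hypothetical domain; check the project README for current options), and the downloaded files can then be served with Nginx, Apache, etc.:

wayback_machine_downloader https://expired-example.com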

Social Media Account Bonus

For expired sites you’ve bought, have you ever set up a mail server and monitored for incoming emails? I have! And the results can be fascinating. What you might find is that the domain you just bought used to belong to a functioning company, and some of that company’s employees used their work email to sign up for services like social media sites. Now, the company went under, those employees don’t have access to their former work email, but you do. Once again, I need to stress that this is a legal gray area, so do what you can to give these people back their accounts, and change PII on any others you decide to keep as catphish accounts.

General Tips for any Domain

Send Some ‘Safe’ Emails First

Some SEGs will block emails from domains they have never seen before, regardless of reputation. Therefore, it's generally a good idea to send some completely benign emails with your new domain before you send out any phishing pretexts. You can address these to the 'info@' or similar generic mailboxes and keep the messages vague so they hopefully go unnoticed/ignored by any humans on the other end. We're just trying to prime the SEG to decide that emails from our domain are safe and should be allowed in the future. I've also heard that adding hidden links with HREFs that point to your domain to otherwise benign emails will work against some SEGs, though I have not personally experimented with this technique.

Use Sendgrid, MailChimp, etc.

These services specialize in mass email delivery for their clients. They work tirelessly to keep their email servers off of block lists and help marketers deliver phishing campaigns… ahem I mean ‘advertisements’ to their targets… oops I mean ‘leads’. Once again, marketing tools are your friend when cold emailing. Keep in mind that these services don’t appreciate scammers and spammers like you, so don’t be surprised if you get too aggressive with your phishing and your account gets blocked. Yet another argument for keeping campaigns small and targeted.

Use your Registrar’s Email Service

Most domain registrars also offer hosting and email services. Some even provide a few mailboxes for free! Similar to email sending services, your registrar’s mail servers will often be delivering tons of legitimate messages for other customers and therefore have a good reputation from an IP perspective. If you take this route, be sure to let the registrar know that you are a penetration tester and which domains you will be using for what campaigns. Most I’ve worked with have an abuse email address and will honor your use case.

Use Gmail for the Long Con

Personal emails are often a blind spot for corporate email filters. Completely blocking inbound emails from Gmail, Yahoo, Outlook, etc. is a tough pill to swallow and most network administrators decide to allow them instead of answering constant requests from employees to put in an exception for this or that sender. In addition, it’s extremely difficult to apply reputation scores to individual senders. Therefore, if you can think of a pretext that would make sense to come from a Gmail account, you have a high probability of at least bypassing domain reputation checks. This can work really well for individualized campaigns where you actually have a back-and-forth conversation with the targets before link/payload delivery.

Set Up SPF, DKIM, and DMARC

For sending domains you own, it will always work in your favor to set up email security settings. Check the API documentation for your registrar and write a script to set these DNS settings.
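As a rough sketch (hypothetical domain, IP, and DKIM selector), the DNS records you are aiming for look something like this:

yourphishdomain.com.                       TXT  "v=spf1 mx ip4:203.0.113.10 -all"
selector1._domainkey.yourphishdomain.com.  TXT  "v=DKIM1; k=rsa; p=<your DKIM public key>"
_dmarc.yourphishdomain.com.                TXT  "v=DMARC1; p=none; rua=mailto:dmarc@yourphishdomain.com"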

Read the Logs!

When you do start sending emails with a domain, make sure to watch your SMTP logs carefully. They are often extremely helpful for identifying whether your messages are being blocked for reasons you can control. For example, I know Mimecast will frequently block emails from new senders it hasn't seen before, but it will respond with a very helpful message explaining how to get off its 'gray list'. The solution is simply to resend the email within the next 24 hours. If you aren't watching the logs, you'll assume you've been blocked when you could easily have delivered your email.

Parting Wisdom

“What I recalled of Sunday School was that the more difficult something became, the more rewarding it was in the end.” — Big Phish


One Phish Two Phish, Red Teams Spew Phish was originally published in Posts By SpecterOps Team Members on Medium, where people are continuing the conversation by highlighting and responding to this story.

The post One Phish Two Phish, Red Teams Spew Phish appeared first on Security Boulevard.
