Part Four
Modern malware constantly tries to connect to a C2 server, making outbound traffic a key way to catch undetected attacks.
Malware, like E.T., "phones home," and this can be easily spotted if you look for it. Many organizations don't look, assuming they are safe until proven compromised; it is wiser to assume they are compromised until proven safe.
Track all ongoing network connections to less trusted networks like the internet. List and exclude safe ones (like VPNs), and investigate the rest. Be ready with an incident response plan!
Check your firewall for data on persistent connections. If that data is not available, log all traffic and use a script to find traffic that crosses your network regularly. The "persistent.pl" script in /usr/local/bin on the Sec-511-Linux VM handles Squid proxy logs but can be modified for other log formats.
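As a sketch of the idea (this is not the actual persistent.pl, and the hostnames are hypothetical), the core logic reduces to counting the distinct days on which each destination appears in the proxy logs:

```shell
# Sample (day, destination) pairs as they might be extracted from Squid logs.
# Deduplicate same-day repeats, then flag destinations seen on 3+ distinct days.
printf '%s\n' \
  '2023-06-14 evil.example.net' \
  '2023-06-15 evil.example.net' \
  '2023-06-16 evil.example.net' \
  '2023-06-16 www.example.com' \
  | sort -u \
  | awk '{days[$2]++} END {for (d in days) if (days[d] >= 3) print d}'
```

A real script would first parse the timestamp and destination fields out of access.log, and the day threshold is tunable for your environment.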
After listing connections, you'll find three types: authorized, unauthorized, and suspicious ones. Set your script to ignore authorized tunnels. Fix policy issues and follow your incident response plan for malicious cases.
Rerun the script daily to find long-lasting connections. You can edit persistent.pl in /usr/local/bin/ on your Sec-511-Linux VM.
C2 traffic uses many protocols, like IRC (a chat protocol from 1988), DNS, ICMP, and P2P tools like BitTorrent. Encrypted versions of these are becoming more common.
Malware frequently uses ICMP for C2 and to transfer data.
A POS scraper sends stolen data to a dump server and uses an ICMP packet to send a status update. An ICMP listener logs this to "log.txt" and shows a message in the console.
Note the SSH banner contained in the echo reply payload. Needless to say, this is not a normal ICMP payload.
The Whitecap rules can be found at https://sec511.com/4v. They work with NIDS like Snort and Suricata. The main idea is to ignore normal ICMP echo requests and alert on others.
The Whitecap project (previously Anomalyzer) was created by the course authors. These rules have found malware and unauthorized ICMP tunnels. This is a starting list of rules for detecting unusual ICMP echo requests, and you might need to add your own. If the rules trigger on harmless requests, you can modify the "pass" rule and change the Snort ID (sid).
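The approach can be sketched as a rule pair (illustrative only; these are not the actual Whitecap rules, and the sids are placeholders). The pass rule matches the standard 32-byte Windows ping payload; any other echo request payload triggers the alert:

```
pass icmp $HOME_NET any -> any any (msg:"Standard Windows ping payload"; itype:8; content:"abcdefghijklmnopqrstuvwabcdefghi"; sid:1000001; rev:1;)
alert icmp $HOME_NET any -> any any (msg:"Anomalous ICMP Echo Request payload"; itype:8; sid:1000002; rev:1;)
```

Suricata evaluates pass rules before alert rules by default, so matching echo requests never alert; Snort's rule ordering may need adjustment (for example, via the config order directive) for the pass rule to take precedence.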
ICMP Echo Requests happen often on internal networks. While some are started by people, most are from enterprise apps and services that use them regularly. Brad Duncan from Palo Alto Unit 42 shares detailed reports on malware activity. In one report about Hancitor, he points out how this simple ICMP Echo Request (ping) is used.
Duncan's Hancitor writeup shows how important it is to know what normal activity looks like for your organization. It's very unusual for one device to send 1.5GB of pings by itself. Plus, the malware targets all private IP addresses, even those unrelated to the organization's internal network.
DNS is a strong tool for attackers to control and tunnel data. Many applications rely on it, but it’s often overlooked until something goes wrong. Because there are so many normal requests, most people ignore DNS security.
DNS has a unique feature that makes it useful for attackers as a command and control (C2) method. Instead of clients resolving names directly, they send requests to local DNS servers, which find the answers. This setup can create unexpected ways to communicate with the internet from restricted areas. If a server can resolve internet names, it can send or receive signals online, even if it shouldn't have internet access.
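Because tunneled data rides inside the query names themselves, even a crude check on label length can surface abuse. A minimal sketch (all domain names hypothetical):

```shell
# Flag query names with an unusually long leftmost label, a crude
# indicator of data being smuggled inside DNS queries.
printf '%s\n' \
  'www.example.com' \
  'mail.example.org' \
  'mzxw6ytboi5dgnbvgy3tqojqmfrwgzlt.tunnel.example.net' \
  | awk -F. 'length($1) > 25 {print "SUSPICIOUS:", $0}'
```

A production analytic would also consider query volume, entropy, and record types (TXT and NULL are popular for tunneling), but label length alone is a cheap first pass.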
HTTP is commonly used for command and control (C2) because it blends in with regular traffic and can go through HTTP proxies. Modern malware can find and use these proxies like a browser does.
Note how aggressive the C2 traffic shown above is: based on the pcap timestamps, each POST occurred less than 0.3 seconds after the previous one.
Most web browsers send "Mozilla" in their user agent string, even ones that aren't Mozilla, like Internet Explorer. This dates back to the early days of web browsers. When Netscape was launched, it was better than NCSA Mosaic because it supported frames. Netscape was originally called "Mozilla," short for "Mosaic Killer." As a result, many web servers delivered frame-enabled content to Netscape and non-frame content to other browsers, thinking they were Mosaic.
Internet Explorer (IE) added "Mozilla" to its user agent string to get websites that supported frames since it often got non-frame versions. Other browsers like Safari and Chrome did the same. However, Opera didn’t include "Mozilla" in most versions before switching from its Presto engine to the Blink engine.
If you’d like a higher-fidelity approach, you may also use Tshark:
This tells Tshark to identify all HTTP traffic containing a user_agent field and then print only the values of the specified fields (the user_agent itself).
Microsoft names its operating systems with "NT" version numbers, which appear in various places like user agent strings. This helps identify the client's operating system during analysis.
Let’s break down one of the user agents:
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.44
Mozilla/5.0
Application name and version. For historical reasons, Internet Explorer identifies itself as a Mozilla browser.
Windows NT 10.0
The Platform token shows the operating system and version. Note that Windows 11 also reports "Windows NT 10.0," so this token indicates only Windows 10 or later; in this case, the system was actually running Windows 11.
AppleWebKit/537.36
AppleWebKit is one of the major rendering engines, but notably, it appears here purely for compatibility purposes.
KHTML, like Gecko
Another token referencing a different rendering engine, Gecko; it, too, is present simply for compatibility purposes.
Edg/95
Edg, as opposed to EdgeHTML, indicates the browser is actually Microsoft's Chromium-based Edge browser. EdgeHTML would indicate the prior, non-Chromium version of the browser.
You might wonder: "Malware can easily bypass this check by using a common string or even a real user agent like 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36 Edg/95.0.1020.44'."
Some malware acts that way, but not all do. If we have a quick and effective method, we should use it. Remember, a solution doesn't need to be perfect to be helpful, especially since no perfect NSM solution is available.
sort -u
Sort all occurrences, then output only the unique ones.
awk '{print length, $0;}'
Print the length of each User-Agent, followed by the agent itself.
sort -nr
Sort numerically, in reverse order, on the length value printed in the previous step.
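Putting the steps together (the User-Agent values below are stand-ins for what a command such as tshark -r traffic.pcap -Y http.user_agent -T fields -e http.user_agent would emit; filename hypothetical):

```shell
# Unique User-Agents, longest first; abnormally short agents sink to the bottom.
printf '%s\n' \
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36' \
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36' \
  'FDMuiless' \
  | sort -u | awk '{print length, $0;}' | sort -nr
```

The duplicate browser agent collapses to one line, and the terse nine-byte agent stands out immediately at the end of the list.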
To "hide in plain sight," use HTTPS. It often passes through firewalls unnoticed, making it great for command and control (C2) communication.
Malware frequently uses port 443, even if it’s not SSL/TLS traffic, because it's usually allowed out without being checked. To improve security, use a proxy to ensure HTTPS compliance and block or alert any non-SSL/TLS traffic on port 443.
Normal HTTPS involves an SSL/TLS handshake that downloads an X.509 certificate. SSL/TLS VPNs and some malware can skip this handshake by using a pre-shared key instead of exchanging certificates.
Identify all tunnels and ignore the legitimate ones, including SSL/TLS tunnels. A common malware behavior is downloading an executable through TCP port 443 before any X.509 certificates are exchanged, switching to SSL/TLS many packets later than usual.
IBM has a great summary of the SSL/TLS exchange; malware often skips these steps:
The SSL or TLS client sends a "client hello" message that includes the SSL/TLS version, the client's preferred CipherSuites, and a random byte string for later calculations. It can also list the data compression methods the client supports.
The server replies with a "server hello" message that includes the chosen CipherSuite, a session ID, and a random byte string. It also sends its digital certificate. If it needs the client to provide a certificate, it sends a "client certificate request" with supported certificate types and acceptable Certification Authorities.
In HTTPS, the Client Hello packet normally follows immediately after the TCP handshake, and the remainder of the SSL/TLS handshake (shown on the previous slide) follows immediately afterward.
The PCAP shows a TCP handshake, followed by non-SSL/TLS data (a harmful payload), and then an SSL/TLS Client Hello at frame 186. This behavior is typical for many malware types, especially tools like Metasploit and Core Impact.
The difference is clear when using Wireshark’s "Follow TCP Stream." On the left, HTTPS traffic shows the key exchange and parts of the X.509 certificate, like "Google Internet Authority." In contrast, Metasploit’s Meterpreter shows a DOS executable much later, which is suspicious for "HTTPS" traffic. Many malware types behave like Meterpreter.
Malware is using Tor for privacy and security, like people do. Tor often appears as regular HTTPS traffic, which firewalls typically allow and ignore.
Malware is using encryption more to avoid detection. By monitoring X.509 certificates, we can find issues like broken trust chains, short certificates, and missing information.
Some sites monitor X.509 certificates and notify when new ones show up. This takes time but can help in battling advanced threats.
The X.509 standard outlines a common type of public key certificate. These certificates are signed by a Certificate Authority (CA). To check their validity, you can use the CA's public key to decrypt the signature and verify it.
X.509 outlines a trust system with root certificates at the top. This is different from other systems like the web of trust used in Pretty Good Privacy.
The browser creates a hash from the certificate's contents and uses the CA's public key to decrypt its digital signature, revealing the CA's hash. If the hashes match, it proves that the certificate is unchanged (integrity) and was signed by the CA (authentication).
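You can watch this verification in action with OpenSSL (assuming the openssl binary is available; the CN and file paths below are arbitrary). A self-signed certificate validates only against itself, since issuer and subject are the same entity:

```shell
# Generate a throwaway self-signed certificate and key.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=www.example.com" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# Verify the signature using the certificate itself as the trust anchor.
openssl verify -CAfile /tmp/demo-cert.pem /tmp/demo-cert.pem

# Issuer and subject are identical: the hallmark of a self-signed certificate.
openssl x509 -in /tmp/demo-cert.pem -noout -issuer -subject
```

In a real chain, the -CAfile argument would be the CA's certificate rather than the leaf itself, and verification would fail for any certificate not signed by that CA.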
We connected to the Alexa Top 500 internet sites via SSL and saved our handiwork to /pcaps/normal/https/alexa-top-500.pcap. We then processed the pcap with Zeek:
We then processed Zeek’s "x509.log", grabbing the issuer field:
What is wrong with these identity fields?
CN=www.c53yf7zxed2.com
CN=www.u5andbly3bbduuzvigs.com
CN=www.e3ja5vxzge.com
CN=www.wc62pgaaorhccubc.com
CN=www.wmylm3gln.com
What do you think about someone who does only what's necessary? Malware often takes this approach by omitting details like Organization and Country. For example, the malware only filled in the CN (Common Name) in the X.509 certificate, leaving the Organization and Country fields empty. The sites in the Common Name fields are also very suspicious.
Simple methods are often the best for adding certificate tracking to your NSM process. X.509 certificates with short issuer fields are suspicious. Our last lab will demonstrate how to extract these fields using Zeek. Remember, as Larry Wall said, "There is more than one way to do it." You can also use Tshark.
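A quick sketch of the length trick on a few sample issuer strings (the long issuers are realistic examples; the short, CN-only one mimics the malware pattern):

```shell
# Print each issuer prefixed by its byte length, shortest first;
# terse, CN-only issuers rise to the top.
printf '%s\n' \
  'CN=GTS CA 1C3,O=Google Trust Services LLC,C=US' \
  'CN=DigiCert TLS RSA SHA256 2020 CA1,O=DigiCert Inc,C=US' \
  'CN=www.wmylm3gln.com' \
  | awk '{print length", "$0}' | sort -n
```

The same pipeline works unchanged on the issuer column produced by zeek-cut in the lab.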
Then compare/contrast with Tbot (C2 via HTTPS via Tor):
Let’s break that command down:
Remove any lines beginning with a "-" (means the field was empty): grep -v ^-
Remove any lines containing a comma: grep -v ,
Simply having encrypted payloads doesn’t stop detective analytics from working. Most TLS communications, especially HTTPS, send certificates and useful info in cleartext. However, not all encrypted malicious messages provide useful data, even with cleartext certificates. If we've run out of standard techniques for analyzing certificates and domains, and we can't use TLS interception to get cleartext payloads, are we stuck? Of course, we wouldn’t have asked if we had no other tricks left.
TLS fingerprinting is similar to OS and service fingerprinting, but we can't see the actual data like we can with OS/service fingerprinting. For two systems that haven't talked before to securely exchange data, they need to share details during the TLS handshake. Let's look at the information available in the cleartext during the Client and Server Hello parts of the TLS handshake.
In TLS, the client starts the handshake with a ClientHello
message that includes:
Cipher Suites: A list of supported encryption methods.
Supported Groups: (EC)DHE groups the client can use, along with key shares for those groups.
Signature Algorithms: Signature methods the client accepts, with an optional list for certificates.
Pre-Shared Keys: Known symmetric keys and their exchange modes.
Client Hello details depend on the operating system and client application. Since the Client Hello is sent before any information is received from the server, a given client's Client Hello should be consistent across different TLS connections.
Packet View: TLS Client Hello and Server Hello
After the Client Hello in the TLS handshake, there’s a Server Hello. This Server Hello is created in response to the Client Hello and shows the choices the server makes based on what the client offered.
TLS 1.3 Server Hello:
The server replies with a Server Hello message after receiving a Client Hello message if it can agree on handshake settings. The information in the Server Hello depends on what the Client Hello provides, making it unique for fingerprinting.
Because the Client Hello has random and changeable parts, hashing the whole packet isn't practical. However, some parts are consistent if the application architecture is the same.
The JA3 method collects decimal values from specific parts of the Client Hello packet: Version, Accepted Ciphers, Extensions, Elliptic Curves, and Elliptic Curve Formats. It joins these values in order, using a comma to separate fields and a dash to separate values within each field.
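For example, with hypothetical field values, the JA3 string and its MD5 hash can be produced with standard tools:

```shell
# JA3 string: TLSVersion,Ciphers,Extensions,EllipticCurves,ECPointFormats
# (all values below are made up for illustration)
ja3_string='771,4865-4866-4867-49195,0-23-65281-10-11,29-23-24,0'
printf '%s' "$ja3_string" | md5sum | awk '{print $1}'
```

The resulting 32-character hex digest is the JA3 fingerprint; in practice, tools like Zeek and Suricata assemble the string from the live Client Hello and hash it for you.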
The JA3 hash stays the same for a specific client application, even when connecting to different servers with varying TLS settings. It mainly identifies the client application on the source system. One system can show multiple JA3 hashes when different applications are used to create TLS connections.
The server's reply (Server Hello) to the client's request (Client Hello) is tailored to the client's details. Different clients will get different Server Hello messages from the same server, making it harder to identify the server in TLS communications compared to the client.
JA3S fingerprints can be tricky to use, but they are helpful. A good way to use JA3S is to detect C2 communications where a compromised host uses a normal-looking client to connect to a C2 server.
Fingerprinting TLS clients and servers helps analyze encrypted communications. To do this, we need access to network data and a tool to create and log JA3(S) fingerprint hashes. Instead of standalone tools, it's better to integrate this capability with our existing tools. Many popular free or open-source analytics tools can either generate JA3(S) hashes directly or be modified to do so.
Currently, Wireshark doesn't support JA3(S) natively, but you can use an open-source plugin (ja3.lua) from GitHub. This plugin needs another one (md5.lua) that’s also on GitHub. Just place these two .lua files in Wireshark’s plugins folder to enable JA3 and JA3S fields for relevant packets.
To use JA3(S) in Tshark, you don’t need extra setup beyond what’s needed for Wireshark. Just place the Lua scripts in Wireshark's plugins folder, and they will work for Tshark too. With these scripts, you can use JA3 fields in both Wireshark and Tshark.
Zeek can use JA3 like Wireshark/Tshark, but it requires an open-source package. The package is officially supported, making it easy to enable: just use Zeek's package manager, zkg, to install it.
Use this zkg command to install the ja3 package. To use it with Zeek from the command line, just add 'ja3' to your command. Here’s how we use the ja3 package to analyze tbot.pcap:
After installing the ja3 package and using it, we will see new fields called ja3 and ja3s in the ssl.log.
The Zeek-cut options (-M and -F) are for making the output easier to read by showing field names and choosing a delimiter.
Suricata makes it easier to use JA3 than Wireshark and Zeek because it supports it directly. Just remember to enable JA3 in the config file. In the Suricata config file (/etc/suricata/suricata.yaml on the SEC511 VM), set ja3-fingerprints to yes.
With JA3 enabled in the config file, we can now process a pcap.
To print the value of the ja3.hash fields in the eve.json file, we use cut and jq.
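For instance, given a single eve.json-style record (the hash value is made up), jq can pull out just the JA3 hash:

```shell
# Select TLS records that carry a JA3 hash and print only the hash value.
printf '%s\n' '{"event_type":"tls","tls":{"ja3":{"hash":"771d1c847a69a5a9fc72d05eeec9ef37"}}}' \
  | jq -r 'select(.tls.ja3.hash != null) | .tls.ja3.hash'
```

Against a full eve.json, piping the output through sort -u yields the distinct JA3 hashes seen on the wire.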
There are several good free and open-source tools to calculate JA3 client and server hashes. Once we have these hashes, we can use them to learn more about the application linked to the hash. If we control the source system, we can find out which application made the request. We can also check services like abuse.ch’s SSLBL or JA3er for information about the JA3 hashes.
SSLBL only looks at suspicious JA3 hashes but warns that these "fingerprints" haven’t been checked against safe traffic. A .csv file of the hashes is available for offline use and updates every 5 minutes. Abuse.ch also offers an automatically updated Suricata ruleset to help detect these suspicious hashes.
If a JA3(S) hash is linked to bad traffic, it's smart to alert or block it. SSLBL from abuse.ch helps by giving ready-made Suricata rules for this.
Blocklisting has a key flaw: we need to know the bad JA3 hashes ahead of time. It's important to note that changes like version updates or modifications to a malicious app often won't affect the JA3 hash. Even though attackers are looking for ways to bypass JA3, this method is still useful.
Safelisting JA3 hashes is hard due to their growing number, but it's more feasible in tightly controlled networks with well-managed, similar devices. This works best in secure segments with trusted endpoints.
JA3 safelisting involves creating a list of known JA3 hashes to allow or ignore. New hashes trigger alerts, but in large or changing networks, false positives may occur due to missing benign hashes.
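The mechanics are simple: with comm and two sorted files (the hash values below are placeholders), never-before-seen hashes fall out directly:

```shell
# Known-good JA3 hashes collected during a baselining period.
printf '%s\n' 'aaaa1111' 'bbbb2222' | sort > /tmp/ja3-safelist.txt
# Hashes observed today.
printf '%s\n' 'aaaa1111' 'cccc3333' | sort > /tmp/ja3-observed.txt
# Lines unique to the observed file are new, alert-worthy hashes.
comm -13 /tmp/ja3-safelist.txt /tmp/ja3-observed.txt
```

comm requires both inputs to be sorted; the -13 flags suppress lines unique to the safelist and lines common to both, leaving only the new hashes.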
JA3 can be useful, but sometimes it’s not helpful, especially when attackers use common apps to connect to typical servers. It’s even less effective when the Client Hello part of the TLS Handshake is encrypted. Right now, this is mostly a theoretical issue, but there’s a push to eliminate all fingerprinting from the TLS Handshake.
Privacy advocates want to encrypt the Server Name Indication (SNI) in the Client Hello of TLS. SNI works like the Host field in HTTP, helping servers identify the client's target site. They aim to protect this info by encrypting it after the client and server set up a secure connection.
The push for Encrypted SNI (ESNI) has evolved to encrypt more parts of the Client Hello message, now called Encrypted Client Hello (ECH).
Cobalt Strike is a tool for penetration testing with many features for exploiting, controlling, and post-attack tasks.
Reconnaissance: Cobalt Strike identifies client apps and versions on the target system.
Covert Communication: Beacon's C2 profile mimics others using HTTP, HTTPS, DNS, and SMB for network access.
Spear Phishing: Cobalt Strike creates and sends convincing phishing emails, tracking clicks.
Collaboration: Teams use Cobalt Strike’s server to share data and manage compromised systems.
Post Exploitation: Beacon runs scripts, logs keys, captures screens, downloads files, and launches payloads.
Use Cobalt Strike for web drive-by attacks or to turn a harmless file into a trojan.
Java Applet Attacks
Microsoft Office Documents
Microsoft Windows Programs
Website Clone Tool
Browser Pivoting: Bypass two-factor authentication to access sites as your target.
Reporting: Cobalt Strike creates reports with activity timelines and indicators for security teams, available in PDF or Word.
In December 2020, an extensive spying campaign was discovered that infected the popular SolarWinds network monitoring software. Investigators found that the attackers used tools like Cobalt Strike Beacon and linked the operation to Russian intelligence agents who have been using Cobalt Strike since 2018. This sophisticated attack targeted a small number of victims but was very effective, using a nearly ten-year-old tool that has gained popularity.
Cobalt Strike was mainly used by strong threat groups, like big cybercriminals (TA3546 or FIN7) and advanced threat groups (TA423 or Leviathan). Proofpoint found that from 2016 to 2018, two-thirds of Cobalt Strike campaigns came from these groups. However, this dropped significantly after 2019, with only 15 percent of campaigns linked to known threat actors.
This profile generates a "gmail.com" X.509 certificate, typically self-signed and not issued by Google. We'll discuss detecting Cobalt Strike soon, focusing on self-signed certificates that impersonate well-known websites.
Use this filter: http.request.method == "POST"
. Right-click a packet and choose Follow -> TCP Stream to view the response.
Objectives:
Analyze a client-side exploit.
Identify suspicious User Agents.
Identify short SSL certificate issuer fields.
Perform hands-on analysis using Security Onion, Zeek, and Wireshark.
Challenges:
The following questions are based on a client-side exploit. A user opened a suspicious email received on June 16th, 2023, and clicked on the attachment. The PC then connected to two remote servers. Security Onion contains useful alerts, and a full packet capture of the incident is available at: /nsm/import/48fdf1a2f6c17303d50a625995ab70ff/pcap/data.pcap
Security Onion references a "Downloader". What are the IP addresses and virtual host names (as shown by the HTTP client "Host" header) of the malicious web servers in this alert?
What is the name of the first EXE transferred during these client-side exploits?
What Microsoft client operating system is running on 10.5.11.57? Be as specific as possible.
The client attempts to POST to 14 different servers using an IP address in the client HTTP host header. What type of malware are these posts associated with?
conduit.pcap contains one suspicious User-Agent, and trickbot.pcap contains two. Identify these suspicious User-Agents. (Analysis of /pcaps/conduit.pcap and /pcaps/trickbot.pcap)
Create a file containing the unique SSL certificate issuers present in both /pcaps/normal/https/alexa-top-500.pcap and /pcaps/tbot.pcap (Analysis of /pcaps/normal/https/alexa-top-500.pcap and /pcaps/tbot.pcap)
Identify the shortest unique SSL certificate issuer in both pcaps. List the length of each shortest issuer in bytes. Omit empty issuers (listed as '-' by Zeek). This happens for attempted TCP port 443 connections that send no data (such as connections that are refused by the server).
Q1) Security Onion references a "Downloader". What are the IP addresses and virtual host names (as shown by the HTTP client "Host" header) of the malicious web servers in this alert?
Let's start by searching for the word "Downloader" in all alerts using the Security Onion Hunt menu.
There are two alerts: one named "ET MALWARE WS/JS Downloader Mar 07 2017 M1" and another called "ET MALWARE Terse alphanumeric executable downloader, likely hostile." The server IPs involved are 213.136.26.180 and 94.152.8.57.
Let's click on the alert for the IP 213.136.26.180, select Actions -> PCAP, and download the PCAP file. We'll open it in Wireshark, right-click on any frame, and choose Follow -> TCP Stream.
The virtual host name is "lifecoachingveronique.be".
Now, let's click on either of the alerts for 94.152.8.57, and follow the same process to view the TCP stream:
Answer: lifecoachingveronique.be -> 213.136.26.180, spugoszcz.brzuze.eu -> 94.152.8.57
Q2) What is the name of the first EXE transferred during these client-side exploits?
The connection to spugoszcz.brzuze.eu indicates that the executable name is "exe1.exe".
Answer: exe1.exe
Q3) What Microsoft client operating system is running on 10.5.11.57? Be as specific as possible.
The previous screenshot shows that the client's Windows NT kernel version is "Windows NT 6.1." This version corresponds to Windows 7 or Server 2008 R2. Since the question asked for the "Microsoft client operating system," we can conclude it is Windows 7.
Answer: Windows 7
Q4) The client attempts to POST to 14 different servers using an IP address in the client HTTP host header. What type of malware are these posts associated with?
Let's go back to the Hunt menu, search for the client IP address (10.5.11.57), and group the results by rule name.
There are 14 alerts for "ET HUNTING GENERIC SUSPICIOUS POST to Dotted Quad with Fake Browser 1" and 14 alerts for "ETPRO MALWARE WIN32/KOVTER.B Checkin 2 M1."
Let's use this search to list those alerts only:
Let's scroll down through the events to confirm that each "KOVTER" alert is linked to a "Dotted Quad" alert.
Each "KOVTER.B" alert is linked to a "Dotted Quad" alert, showing that the suspicious POSTs are connected to KOVTER.
Q5) What is the most suspicious User-Agent string contained in each pcap?
Analysis of /pcaps/conduit.pcap and /pcaps/trickbot.pcap
In both situations, the shortest User-Agents seem the most suspicious. Let's open a Linux terminal and enter this:
We can see that both short User-Agents lack the strings "Mozilla" and "CryptoAPI."
We can open both pcaps in Wireshark and search for the strings to see their context. To view conduit.pcap, let's follow these steps in a Linux terminal:
To search, let's go to "Edit" and click "Find Packet." Make sure the dropdown next to the search box is set to "String" (rather than "Display filter"), and change "Packet list" to "Packet bytes" on the far left. Then type "FDMuiless" in the search box; the background should turn green.
Q6) Create a file containing the unique SSL certificate issuers present in both /pcaps/normal/https/alexa-top-500.pcap and /pcaps/tbot.pcap
Let's create a directory named "/tmp/zeek." Next, we'll run Zeek on the file "alexa-top-500.pcap," and use zeek-cut to find all SSL certificate issuers. Identify the unique issuers and save them to "/tmp/alexa.txt."
The shortest issuer is 30 bytes. The issuer shown as '-' is empty because the connection attempt to TCP port 443 was reset by the server and didn’t send any data.
Let's do the same for /pcaps/tbot.pcap.
The shortest issuer is 19 bytes.