How I Solved flaws2.cloud
My write-up for all levels of flaws2.cloud, both Attacker Path & Defender Path
After completing flaws.cloud a few days earlier, I was tasked with completing the sequel. Let's jump right into it.
Again, this write-up is not the "definitive" way of solving flaws2.cloud, but rather about how I approached the challenges: what I tried, what failed, and what succeeded.
Attacker Path
Level 1: Web Exploitation, Sorta-Kinda?

The first level gives us a PIN code input. To advance to the next level, we must enter the correct PIN. It is also noted that the correct PIN is 100 digits long, so we can't brute-force it. Initially, I thought this was too "guessy", since I assumed that for a challenge centered on AWS, the exploit couldn't be client-side. It turns out that inspecting the element revealed something interesting:

There are two things worth noting here:
1. The PIN code form is submitted to a REST API in AWS API Gateway, with a REST API ID of 2rfismmoo8, a region of us-east-1, a stage of default, and the resource path level1. Since there's no 'method' attribute in the form, the request to the API will be a GET request.
2. There is input validation so that the form only accepts numbers. For invalid inputs, the form is not submitted; a client-side alert shows up instead.
This basically means that a submission with an invalid value never reaches the API. So I got curious: what would happen if we sent an invalid value straight to the API? I tried curl-ing it.
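Since the form attributes map onto the standard API Gateway invoke-URL format, the endpoint can be reconstructed like this (a sketch; the URL format is from the API Gateway docs, not the page itself):

```python
# Invoke URL pieces taken from the form's attributes.
# API Gateway invoke URLs follow the documented format:
# https://{api-id}.execute-api.{region}.amazonaws.com/{stage}/{resource}
api_id = "2rfismmoo8"
region = "us-east-1"
stage = "default"
resource = "level1"

url = f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}/{resource}"
print(url)
# curl-ing this with a non-numeric value, e.g. curl "$URL?code=abc",
# skips the client-side validation entirely.
```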
There is so much information here, including a valuable access key, secret key, and session token. After configuring them using aws --profile flaws2-1 configure, I tried getting the identity information.
From here, we know that the identity we just got is not a user, but an assumed role (level1) with a session name called level1 as well. After this discovery, I tried many commands to no avail:
aws --profile flaws2-1 lambda list-functions
aws --profile flaws2-1 lambda get-function --function-name level1
aws --profile flaws2-1 iam list-attached-role-policies --role-name level1
aws --profile flaws2-1 s3 ls
aws --profile flaws2-1 ec2 describe-instances
aws --profile flaws2-1 iam list-roles
aws --profile flaws2-1 iam list-policies
and many more.
After quite some time, I realized I had fatally missed one important command: listing the level's bucket. The s3 command I tried only lists the buckets the identity owns, but I forgot to try listing the level 1 bucket itself...
We have the second level.

Level 2: Thank God, I Understand Containerization
This next level is running as a container at http://container.target.flaws2.cloud/. Just like S3 buckets, other resources on AWS can have open permissions. I'll give you a hint that the ECR (Elastic Container Registry) is named "level2".
Alright, we’re already given a starting point. Let’s look into the ECR right away.
Knowing that the next level runs as a container, and that containers run from images, the first thing I tried was enumerating the available images in the ECR to gather as much information about them as possible. The list-images command requires the --repository-name parameter, and even though we're not told the repository name, the level description names the ECR "level2", so we can make the educated guess that the repository name is also level2.
I tried other commands like describe-repositories and describe-registry, but access was denied. I also tried the batch-get-image command to confirm we have full access to the image, down to its layers' details. With this, I came up with the idea of pulling the image locally using Docker.
To do that, I had to authenticate my local Docker client. When authenticating to ECR, the registry uses the following format:
[AWS account ID].dkr.ecr.[region].amazonaws.com
In this case, the AWS account ID provided is 653711331788, and the region is us-east-1. So our registry URL is 653711331788.dkr.ecr.us-east-1.amazonaws.com. For the username, we can use "AWS", and for the password, we can use the output of the get-login-password command. By using the following command:
aws --profile flaws2-1 ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 653711331788.dkr.ecr.us-east-1.amazonaws.com
I was successfully authenticated, which means I could now pull images from the given ECR. We know that the repository name is level2 and the available tag is :latest, so with the following command, we can get a local copy of the level2 image in Docker:
docker pull 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2:latest
After successfully pulling, I ran the image using docker run -d --name flaws2-2 653711331788.dkr.ecr.us-east-1.amazonaws.com/level2:latest, then opened a shell in the container using docker exec -it flaws2-2 sh. I looked into the home directory, but there was nothing.

Then I remembered that this is a webserver. So I went to /var/www/html to see the served files.

In there is the index.htm file, which contains the following:
With this, we have the URL for level 3: level3-oc6ou6dnkw8sszwvdrraxc5t5udrsw3s.flaws2.cloud.
After looking at the Level 2 hints, it turns out this was unintended. The intended solution is to inspect the Docker image and figure out the contents of the Dockerfile to get the secret password, then authenticate to the container. The inspection can be done layer by layer, or, in my case with Docker Desktop, by clicking the image in the Images section.
Level 3: Web Exploitation, Again
The container's webserver you got access to includes a simple proxy that can be accessed with: http://container.target.flaws2.cloud/proxy/http://flaws.cloud or http://container.target.flaws2.cloud/proxy/http://neverssl.com
This is quite similar to the fifth level of the first flaws.cloud. Just to test things out, I tried to proxy to the "magic IP" 169.254.169.254 like in the previous flaws.cloud, but it didn't work. That makes sense, since this webserver runs inside a container rather than on an EC2 instance. I then Googled things like metadata services for containers, also to no avail (maybe the wrong keywords?). As a starting point, I opened the first hint.
Hint 1 said that containers running via ECS on AWS have their creds at 169.254.170.2/v2/credentials/[GUID] where the GUID is found from an environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI. We have our "magic IP" here.
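Putting the hint together, the credentials URL is just the base address joined with the relative URI from the environment variable (the GUID below is a made-up placeholder, not the real one):

```python
# AWS_CONTAINER_CREDENTIALS_RELATIVE_URI holds something like
# /v2/credentials/<GUID>; this GUID is an illustrative placeholder.
relative_uri = "/v2/credentials/00000000-0000-0000-0000-000000000000"
creds_url = "http://169.254.170.2" + relative_uri
print(creds_url)
```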
With this, I looked up the IP address on Google and found this documentation: Amazon ECS task metadata endpoint version 2. Similar to the challenge in the first flaws.cloud, containers too have a task metadata endpoint that can be accessed internally. Since we have a proxy, that internal endpoint becomes accessible as well. I tried the endpoints there, hoping I might find one or more exposed environment variables, but it turns out they hold purely task metadata (simply put, things like the running status of the container).
I'm not gonna lie, I was stuck here for a good long while, because I avoided opening the hints. I simply assumed that the environment variables could be exposed through an internal endpoint, just like in the first flaws.cloud. So I searched hard for documentation and possibly other endpoints, but eventually decided to open the next hint because I had spent too much time searching with no progress.
Hint 2 instead told me something I've known for a long while: "environment variables exist in /proc/self/environ".
This is literally my thought process:
Hah? Ini sih gw juga udah tau. Cuma gimana caranya bisa ngebuka itu??? Emangnya ada LFI? Lah? Bentar. Kalo hintnya ngarahin ke buka /proc/self/environ, berarti ada LFI ga sih? Lah???
Translated (sorta-kinda):
What? I’ve known this. But how do I even get access to the file??? Is there even an LFI? Huh? Wait. If the hint guides me to /proc/self/environ, doesn’t that mean there’s an LFI? Huh???
I focused too much on the AWS aspect and didn't bother to check the webserver for vulnerabilities. The second hint points toward a potential LFI, so I went ahead and checked the deployment files. I did this by running the image from level 2 and accessing the container via the terminal. The proxy runs in Python, with the following script:
And yeah, there are intentional vulnerabilities here.
First of all, this is probably running a very old Python version, as indicated by the use of the legacy urllib.urlopen() function. Based on the legacy urllib documentation, urllib.urlopen() will open a local file if there is no scheme identifier like "http://" in the path. This is already grounds for LFI: if we can pass an absolute file path to the function call, it will open the file and return it as the response.
Second, the proxy server only removes one leading slash in self.path = self.path[1:]. If we give it two slashes, one slash will remain, opening our way to an absolute-path LFI.
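Both behaviors can be reproduced locally; this is a sketch in which urllib.parse.urlparse stands in for the scheme check the legacy urllib.urlopen() performs:

```python
from urllib.parse import urlparse

def proxy_path(request_path: str) -> str:
    # Mirrors self.path = self.path[1:] in the proxy script:
    # strip exactly one leading character (the first slash).
    return request_path[1:]

# Normal use: one slash is stripped and a scheme remains, so the
# legacy urlopen() would fetch the target over HTTP.
assert proxy_path("/http://flaws.cloud") == "http://flaws.cloud"
assert urlparse("http://flaws.cloud").scheme == "http"

# LFI: with two slashes, one survives and there is no scheme, so the
# legacy urlopen() would open it as a local file.
leaked = proxy_path("//proc/self/environ")
print(leaked)
assert urlparse(leaked).scheme == ""
```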
I continued by checking the nginx files, and found this interesting config in /etc/nginx/sites-available/default:
There is also a configuration here that supports the potential LFI found in the proxy server: the nginx config turns off merge_slashes. If that setting were on, multiple slashes (for example, ///) would be merged into one slash (/), which might stop us from doing LFI. Because it is off, the LFI remains possible.
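What merge_slashes being on would have done can be mimicked with a regex (a sketch of the behavior, not nginx's actual code):

```python
import re

def merge_slashes(path: str) -> str:
    # Collapse runs of slashes into one, like nginx does when
    # merge_slashes is on (its default).
    return re.sub(r"/+", "/", path)

# With merging on, the double slash collapses before the proxy ever
# strips its one leading slash, so the absolute path is lost.
merged = merge_slashes("/proxy//proc/self/environ")
print(merged)
```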
Other than that, the proxy endpoint captures the entire string after the proxy/ path, without restriction, and passes it to the proxy server. With this, we can get the environment variables by leveraging the LFI and accessing:
http://container.target.flaws2.cloud/proxy//proc/self/environ
Success! And it returns the following:
And we have what we needed:
We can continue by accessing the link in Hint 1 through the proxy:
http://container.target.flaws2.cloud/proxy/http://169.254.170.2/v2/credentials/b428381e-5bd5-41f4-a6e2-9119a24727e5
The container returns a credential we can definitely use.
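Per the ECS task metadata documentation, the credentials endpoint returns JSON with AccessKeyId, SecretAccessKey, Token, and Expiration fields; the values below are made-up placeholders, not the leaked credentials:

```python
import json

# Placeholder for the JSON body returned by
# /proxy/http://169.254.170.2/v2/credentials/<GUID>
response_body = """{
    "AccessKeyId": "ASIAEXAMPLEONLY",
    "SecretAccessKey": "example-secret-key",
    "Token": "example-session-token",
    "Expiration": "2024-01-01T00:00:00Z"
}"""

creds = json.loads(response_body)
# These values (plus the session token) are what the flaws2-3
# profile configuration needs.
print(creds["AccessKeyId"])
```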
I configured the credentials by running aws --profile flaws2-3 configure, then listed the user’s buckets by aws --profile flaws2-3 s3 ls.
We have the (turns out) final bucket:
the-end-962b72bjahfm5b4wcktm8t9z4sapemjb.flaws2.cloud
The End

The end of flaws2.cloud's Attacker Path.
Honestly, I was a bit disappointed that there were only three levels. Even so, I still learned a lot.
Defender Path
For the defender path, we’re given IAM credentials with a “Security” role attached:
Login: https://flaws2-security.signin.aws.amazon.com/console
Account ID: 322079859186
Username: security
Password: password
Access Key: AKIAIUFNQ2WCOPTEITJQ
Secret Key: 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
Objective 1: Download CloudTrail Logs
I started by configuring the credentials using aws --profile flaws-defender configure, then continued with aws --profile flaws-defender sts get-caller-identity.
The authentication is successful. We're also told that we're given an S3 bucket named flaws2-logs, which contains CloudTrail logs recorded during the attack from the path earlier. I downloaded the logs using aws --profile flaws-defender s3 sync s3://flaws2-logs . and got a lot of .json.gz files under the AWSLogs directory.
Objective 2: Access The Target Account
It is mentioned that the best practice is to have a separate Security account that manages the CloudTrail logs from the other AWS accounts and also has some access into them. In this objective, we access the Target account using an IAM role that grants the Security account access.
In my AWS config, I already have this profile:
To add the target profile, I added a manual entry:
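Reconstructed from the details discussed below, the entry in ~/.aws/config looks like this:

```ini
[profile target]
role_arn = arn:aws:iam::653711331788:role/security
source_profile = flaws-defender
```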
Then I ran aws --profile target sts get-caller-identity and got the following:
Now I'm running under an assumed-role temporary session. What just happened? The way I see it:
1. I authenticated as the flaws-defender profile earlier.
2. When creating the new target profile entry, I used flaws-defender as the source profile, as the "base credentials".
3. For the target profile, I used the arn:aws:iam::653711331788:role/security role. Note that this role is from a different account, with ID 653711331788, not the Security account with ID 322079859186.
4. Essentially, this tells the AWS CLI that "the security user from account ID 322079859186 (denoted by the source_profile entry) wants to assume the security role from account ID 653711331788 (by filling the role_arn entry with that role's ARN)".
5. Since the security user is trusted by the role, and has the permission to assume it, STS gives us temporary credentials: an assumed-role temporary session.

So now, even though we were using the security user's credentials, we become someone with a security role in the account with ID 653711331788.
With this in mind, and knowing that account ID 653711331788 is the one used in the Attacker Path, running aws --profile target s3 ls should now show us the buckets from the Attacker Path (since we're now someone within account 653711331788).
Objective 3: Use jq
I didn't have jq, so I installed it first. Then I ran find . -type f -exec gunzip {} \; to decompress all the log files, and cat-ed them through jq using find . -type f -exec cat {} \; | jq '.'.
The results of those commands are nicely formatted, but there is too much information and the output is too long. I changed the jq query to display only the event names by replacing the piped jq command with jq '.Records[]|.eventName'.
It turns out these aren't ordered, so the jq query can be refined to include the time as well by using jq -cr '.Records[]|[.eventTime, .eventName]|@tsv' | sort in the piped part. This outputs the event time and name, and since the time is in the first column, it becomes the sort key. From there, we can keep adding columns to gather more info. In the Defender Path guide, the final query becomes:
The output is a little packed, but through the command, we learn that the logs record essentially all activity in the AWS infrastructure, not just the attacks. That includes other resources assuming roles, S3 requests from browsers listed as coming from "ANONYMOUS_PRINCIPAL", and so on.
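A jq-less sketch of the time-then-name extraction and sort from earlier, run on made-up records (the real ones come from the .json.gz files):

```python
import json

# Two made-up CloudTrail-shaped records, deliberately out of order.
log = json.loads("""{
    "Records": [
        {"eventTime": "2018-11-28T23:09:00Z", "eventName": "ListImages"},
        {"eventTime": "2018-11-28T23:03:00Z", "eventName": "AssumeRole"}
    ]
}""")

# Equivalent of: jq -cr '.Records[]|[.eventTime, .eventName]|@tsv' | sort
rows = sorted(f'{r["eventTime"]}\t{r["eventName"]}' for r in log["Records"])
print("\n".join(rows))
# Sorting whole lines orders by eventTime because the timestamp is
# the first column.
```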
Objective 4: Identify Credential Theft
Looking at the logs, it's apparent that the attack happened around the ListBuckets event. To identify what happened, we can work backwards from there, starting by querying jq on the eventName property.
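Filtering on eventName amounts to a jq select; here's the same idea sketched in Python on placeholder records (the IP is a documentation address, not the attacker's):

```python
import json

# Made-up records shaped like CloudTrail entries.
log = json.loads("""{
    "Records": [
        {"eventName": "AssumeRole", "sourceIPAddress": "ecs-tasks.amazonaws.com"},
        {"eventName": "ListBuckets", "sourceIPAddress": "203.0.113.7"}
    ]
}""")

# Equivalent of: jq '.Records[] | select(.eventName == "ListBuckets")'
hits = [r for r in log["Records"] if r["eventName"] == "ListBuckets"]
for r in hits:
    print(r["sourceIPAddress"])
```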
From the results, we can identify that the request doesn't come from an Amazon-owned IP address. And since the identity being used is an assumed role from role/level3, we need to look at what that role is actually for.
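For reference, a trust policy that only allows the ECS tasks service to assume the role generally looks like this (a generic example, not the account's actual document):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ecs-tasks.amazonaws.com"},
            "Action": "sts:AssumeRole"
        }
    ]
}
```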
As we can see, this role was created for the ECS service, and the AssumeRole action inside the AssumeRolePolicyDocument is only allowed for the ECS tasks service. Still, the request comes from a non-AWS IP address, which implies that the ECS task has been compromised, though we don't have logs from inside the container.
Objective 5: Identify the public resource
Based on the challenges I've worked on, in credential-theft cases there is usually at least one public service or action that serves as the entry point. In the logs, we can see that the source IP address performs other actions as well while assuming the level1 role. To identify where the public service or action is, we need to trace it through the logs.
Before the ListBuckets action using the assumed level3 role, there is a series of actions that look like they were performed on ECR (ListImages, BatchGetImage, GetDownloadUrlForLayer). We'll start from ListImages, because it's safe to assume these actions were the credential-theft entry point, since they were the last few actions before the attacker assumed the level3 role.
Through the output, we know that the attacker was listing the images in the level2 repository, even though they were only assuming the level1 role. This means there might be loose permissions on that repository. We can look into the repository's policies.
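For reference, a repository policy that opens these actions to everyone looks roughly like this (illustrative; the exact statement in the real policy may differ):

```json
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "AccessControl",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:ListImages"
            ]
        }
    ]
}
```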
The policy's Principal is set to *, meaning these actions are public for the whole world to execute. In other words, the ECR repository is public. The Defender Path tutorial mentions that we can use tools like CloudMapper to scan an account for public resources (like this ECR) before having to trace back an attack.
Objective 6: Use Athena
In this objective, the users are instructed to use Athena, an AWS service that can be used to analyze data directly in Amazon S3 using SQL.
I personally chose to skip this objective to avoid any charges (even though the charges, if any, would likely be small), and because it's more or less the same as the previous objectives, just using SQL this time. Even so, it is mentioned that Athena can be more useful for incident response, because we don't need to wait for the data to load and can query right away, as long as we have defined the appropriate table.
Closing Remarks
Personally, even with fewer levels, I like flaws2.cloud more, because the levels feel like actual cloud-infrastructure penetration testing (down to exploiting vulnerabilities in web containers). The flow of the attacks is fun to follow, and I learned many more new concepts. Not to mention there's a Defender Path to teach users the basics of incident response for cloud infrastructure attacks. I hope there's a flaws3.cloud.