How I Solve flaws.cloud
My write-up for all levels of flaws.cloud
A while back, I worked on a technical test for a Cloud Security internship, where I was given a (bonus) task to complete flaws.cloud. flaws.cloud is a series of web-based cloud-exploitation challenges focused specifically on Amazon Web Services (AWS). Each level simulates common misconfigurations and security pitfalls in AWS, requiring players to identify weaknesses, escalate privileges, and pivot within a single AWS account in order to progress to the next level.
At the time of the technical test, I didn't get around to completing flaws.cloud due to time restrictions. Fortunately, I got the internship anyway (yay 🎉), and I was then tasked with completing flaws.cloud. This write-up is not the "definitive" way of solving flaws.cloud, but rather a record of how I approached the challenges: what I tried, what failed, and what succeeded.
Level 1: flaws.cloud
This level is *buckets* of fun. See if you can find the first sub-domain.
Early on, I didn't know where to start, so I opened the hint. Hint 1 said that flaws.cloud is hosted as an AWS S3 bucket. We can verify this by looking up the DNS with nslookup or dig. nslookup gives the IP address 52.92.176.179, and a reverse lookup on that IP address gives us results related to AWS:
> nslookup 52.92.176.179
179.176.92.52.in-addr.arpa name = s3-website-us-west-2.amazonaws.com.
Authoritative answers can be found from:
92.52.in-addr.arpa nameserver = x2.amazonaws.com.
92.52.in-addr.arpa nameserver = x3.amazonaws.org.
92.52.in-addr.arpa nameserver = pdns1.ultradns.net.
92.52.in-addr.arpa nameserver = x4.amazonaws.org.
92.52.in-addr.arpa nameserver = x1.amazonaws.com.

From this, we obtain the information that flaws.cloud is hosted in the us-west-2 region. Since we know that flaws.cloud is an S3 static site, it should also be accessible in the following URL format:
http://<BUCKET_NAME>.s3-website.<REGION>.amazonaws.com
In this case, it will be http://<BUCKET_NAME>.s3-website.us-west-2.amazonaws.com. This is because AWS S3 buckets used for static hosting are given a domain so that we can access them without setting up DNS. We just need to know the bucket name, and from the hint we know it's flaws.cloud, so the S3 URL becomes http://flaws.cloud.s3-website.us-west-2.amazonaws.com
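As a quick sketch, both public endpoints we care about can be derived from just the bucket name and the region:

```shell
# An S3 bucket exposes two useful public endpoints:
BUCKET="flaws.cloud"
REGION="us-west-2"
# 1) the static-website endpoint, which serves the site's pages:
echo "http://${BUCKET}.s3-website.${REGION}.amazonaws.com"
# 2) the REST endpoint, which returns an XML object listing when the bucket is public:
echo "http://${BUCKET}.s3.amazonaws.com"
```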
The hint also says "permissions are a little loose". This implies that maybe we can fetch information about the bucket through the AWS CLI? Maybe we need to find another bucket, not one named flaws.cloud this time? I was quite stuck here, so I opened the second hint.
Hint 2 said that we now know there's a bucket named flaws.cloud in the us-west-2 region and that, apparently, we can list the contents of that bucket! Earlier, I had searched around for AWS CLI commands we can use knowing a bucket name, but I thought they were only applicable to buckets we own ourselves. Turns out I was wrong. So I ran this command:
aws s3 ls s3://flaws.cloud --no-sign-request
and this will list the files in the bucket. The --no-sign-request flag makes the request anonymous (unsigned) instead of using configured credentials, so it will not work against buckets with strict permissions. Running that command, we discover a page:

secret-dd02c7c.html page.

Accessing the secret-dd02c7c.html page gives us the link for the next level. The level 2 page also briefly explains the level 1 challenge, where it is mentioned that we can also list S3 files through the browser using the format http://[BUCKET_NAME].s3.amazonaws.com.
What just happened?
AWS S3 buckets can be configured with various permissions and features, including hosting static files. A misconfiguration can lead to the public internet being able to list the contents of a bucket, which is what happened in this case.
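As a side note on the reverse lookup from earlier: running nslookup on an IP queries the PTR record for the IP's in-addr.arpa name, which is just the octets reversed. A minimal sketch:

```shell
# Build the in-addr.arpa name that a reverse (PTR) lookup queries:
# the IPv4 octets are reversed, then suffixed with .in-addr.arpa.
IP="52.92.176.179"
PTR_NAME=$(echo "$IP" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')
echo "$PTR_NAME"   # 179.176.92.52.in-addr.arpa
```

This matches the 179.176.92.52.in-addr.arpa entry in the nslookup output above.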
Level 2: Same, With A Twist
The next level is fairly similar, with a slight twist. You're going to need your own AWS account for this. You just need the free tier.
This level requires my own AWS account. I assume the operations will be through the AWS CLI as well. After configuring my account through aws configure, my instinct said to try the aws s3 ls command again. This time, the bucket name is the entire URL, I guess?
To verify that, I opened the URL using the s3.amazonaws.com format. It successfully opens, although access is denied. From here, we know that the bucket name is level2-c8b217a33fcf1f839f6f1f73a00a9ae7.flaws.cloud, and it's safe to assume that every level's URL is the bucket name itself.
Because access was denied through the URL, I assumed listing would also be denied through the CLI, but I tried it anyway. I ran the ls command without the --no-sign-request argument (since my AWS CLI is now configured with access keys), and it turns out listing isn't denied! We found the secret page and the link to level 3.

secret-e4443fc.html page.

What just happened?
On the first level, I remembered having to pass the --no-sign-request argument to list the directory. That's because, without --no-sign-request, a valid configured account in the AWS CLI is needed to perform the listing. Now that I have an account, my intuition said that maybe I no longer need that argument. To verify my suspicion, I ran the ls command again with the --no-sign-request argument, and access was denied. The level 3 page confirms my intuition: the level 2 bucket is misconfigured to allow content listing by any authenticated AWS user, a bit like the previous level, which allowed content listing by everyone.
Level 3: Finally, Real Access
The next level is fairly similar, with a slight twist. Time to find your first AWS key! I bet you'll find something that will let you list what other buckets are.
With the level 3 URL in the picture above, we know that the level 3 bucket is called level3-9afd3927f195e10225021a578e6f78df.flaws.cloud. My first action was listing the bucket contents.
Okay, this is interesting. We have a .git directory. I didn't know whether it would be useful for the level 3 journey, and the rest seemed useless, but it's worth noting that the challenge description said we need to find an AWS key. I continued by listing the objects in the .git directory.
Well, the .git directory contains the standard Git stuff, nothing suspicious, but judging by its contents, it's safe to say it's a valid Git repository. With this, I had an idea: since it's a Git repository, it might have some history. What if we copy the entire bucket, access it as a local Git repository, then check the Git commits to hopefully find something?
Thankfully, a Google search shows that there's a command to download objects from an S3 bucket: cp. My theory: we can download just the .git directory, making the current working directory a Git repo, then go back in time (through the commits) to see whether any files were deleted or redacted.
I tried the following command:
aws s3 cp s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud/.git/ ./
but then I got an error:
My first assumption was that we can't download directories in buckets right away. From the bucket listing and the .git directory listing, I also assumed that objects with the "PRE" prefix are directories. With this, I needed to find a way to download the entire bucket, or at least the .git directory.
After some searching, it turns out the command I had been using was wrong. cp downloads/copies (hence the name cp) individual objects. To download the entire bucket, we can use the sync command, which is essentially used to synchronize a local directory with a bucket (and vice versa), making sure both have the same files in the same state. I tried the following command:
aws s3 sync s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud ./
and got this output:
The command successfully synced my local directory with the bucket, and the entire bucket is now a local Git repo. Running git log --oneline shows that there's a first commit before the one currently deployed in the bucket.
I checked out to that commit (commit f52ec03) and found a credentials file, access_keys.txt.
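The underlying Git trick deserves a tiny local demo: deleting a file in a later commit doesn't remove it from history. (The repo, file name, commit messages, and key below are hypothetical stand-ins, not the real bucket contents.)

```shell
# Demo: a "deleted" secret is still one `git show` away.
REPO=$(mktemp -d)
cd "$REPO"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
echo "AKIAFAKEEXAMPLEKEY" > access_keys.txt     # hypothetical leaked key
git add access_keys.txt
git commit -qm "first commit"
git rm -q access_keys.txt
git commit -qm "remove the keys"
# The file is gone from the working tree, but lives on in history:
git show "HEAD~1:access_keys.txt"   # prints AKIAFAKEEXAMPLEKEY
```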
Perfect. I used the access key through aws configure, using the us-west-2 region (this information is from previous levels). After configuring, I ran aws sts get-caller-identity to verify that the authentication is successful.
We have been successfully authenticated as a user named backup. Now, let's go back to the earlier instructions:
...I bet you'll find something that will let you list what other buckets are.
Since we're now authenticated as the backup user, I assume we can list the buckets this account has, find the level 4 bucket, list it, and go to level 4. To list all buckets of an authenticated user (assuming we've configured the credentials), we can run the command aws s3 ls.
Okay. We found the level 4 bucket. But we also found the names of the other buckets, including the level 5, 6, and end buckets. I wasn't entirely sure this was intentional, but to keep the flow, let's continue to level 4.
Turns out, it *was* intentional. The next levels' challenges are indeed hosted in those buckets, but under subdirectories.
What just happened?
People often leak AWS keys, whether accidentally or intentionally. It's actually not a huge mistake, AS LONG AS we revoke the keys as soon as we know they've leaked. That way, even if an attacker finds a way to recover those keys, they can no longer be used.
Level 4: We're Not In Buckets Anymore
For the next level, you need to get access to the web page running on an EC2 at 4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud
It'll be useful to know that a snapshot was made of that EC2 shortly after nginx was setup on it.
Opening the site leads us to a page protected with basic auth.

The main objective here is to get the credentials to access that web page. I tried listing the level 4 bucket first, as usual. Turns out, the level 4 bucket isn't allowed to be listed, so we're not going to find anything there. What about the EC2? Obviously, it's not S3, so we can't use the s3 command(s), but what if we use the other commands available in the AWS CLI for ec2?
Through some research, I found out that EC2 uses the AWS EBS (Elastic Block Store) service as persistent storage. Any snapshot that has been made is, in my assumption, stored there, or rather, *the* EBS volume being used by the EC2 is the one being snapshotted. Through another Google search, there is a command under ec2 called describe-snapshots, which will "Describe(s) the specified EBS snapshots available to you or all of the EBS snapshots available to you". I tried running it first without any arguments: aws ec2 describe-snapshots.
Aaand, it turns out there were WAY too many snapshots, so I'm not pasting them here. I think the intended way is to pass the ID of the EC2 instance as an argument, so that only the snapshots of that EC2 are shown.
I continued by gathering more information about the EC2 instance first (this should've been the first thing I did, lol). My idea: we have the (sub)domain, so we can dig/nslookup it to get the IP address, then use another ec2 command to get the instance's description by filtering on the IP we found.
We have the IP address of the EC2: 54.202.228.246. With that IP address and some Google searching, we can use the describe-instances command from ec2 with the --filter argument, ip-address as the Name to be filtered, and that IP address as the Value. The command is the following, along with the output I got. I've omitted parts of the output and only show some of the important information.
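Since the screenshots aren't reproduced here, a hedged sketch of this step: the filter command (printed, since running it needs valid credentials), plus extracting the volume ID from a trimmed, hypothetical response fragment (the real IDs differ, and the CLI's --query flag can do the extraction more cleanly):

```shell
# The lookup I ran (printed here; executing it requires configured credentials):
IP="54.202.228.246"
echo "aws ec2 describe-instances --region us-west-2 --filter Name=ip-address,Values=${IP}"

# describe-instances returns JSON; a trimmed, hypothetical fragment:
RESPONSE='{"LaunchTime": "2017-02-12T22:29:24.000Z", "Ebs": {"VolumeId": "vol-0123456789abcdef0"}}'
# Pull out the EBS volume ID:
VOLUME_ID=$(echo "$RESPONSE" | sed -n 's/.*"VolumeId": "\([^"]*\)".*/\1/p')
echo "$VOLUME_ID"   # vol-0123456789abcdef0
```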
Perfect. We have the EBS volume ID. Hopefully, we can use that ID to filter only the snapshots of that specific volume with the previous command. We also have the LaunchTime information, which might help, considering the level instructions say "...shortly after nginx was setup on it.", indicating time.
I went back to the describe-snapshots command, this time using the volume ID as a filter.
Perfect. We got the snapshot we needed and all of its information. Now, I'm not going to lie, I was stuck for quite some time here, so I went and opened the hints.
Reading Hint 1 made me realize that... I ridiculously overengineered. We can actually just run describe-snapshots by passing the --owner-id argument, using the account ID we get from calling aws sts get-caller-identity. So the following command would yield the same results:
aws ec2 describe-snapshots --owner-id 975426262029
But this hint didn't lead me anywhere new, as I already had the snapshot description, so I continued to the second hint.
Hint 2 tells us that we need to mount the snapshot into an instance. To do this, we must create a volume using that snapshot, then mount the volume into an EC2 instance.
Now, honestly, I wasn't confident enough to fire up a volume and an EC2, since I have a Free Tier AWS account. I've read somewhere that creating a volume from a snapshot is not covered by the Free Tier plan (correct me if I'm wrong!), so it's possible I'd get billed. To avoid that, I decided to play it safe and complete this level through the hints. But the grand idea (which I would definitely try if I had a safe amount of money set aside for AWS billing) is: create a volume from the snapshot, mount it on an EC2 instance, then just explore the mounted snapshot. I would look in the /home directory and the webserver/nginx directories, since the deployment is done in a bare-metal manner.
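For completeness, here's the sequence I would have run, printed rather than executed (all IDs are hypothetical placeholders; creating the volume is the part that can incur billing):

```shell
SNAP="snap-0123456789abcdef0"   # hypothetical; the real ID comes from describe-snapshots
AZ="us-west-2a"                 # must match the availability zone of my own instance
# Print the plan instead of running it (these calls need credentials and cost money):
PLAN=$(cat <<EOF
aws ec2 create-volume --region us-west-2 --availability-zone $AZ --snapshot-id $SNAP
aws ec2 attach-volume --region us-west-2 --volume-id <new-volume-id> --instance-id <my-instance-id> --device /dev/sdf
# then, over SSH on the instance:
sudo mkdir -p /mnt/snap && sudo mount /dev/xvdf1 /mnt/snap
ls /mnt/snap/home
EOF
)
echo "$PLAN"
```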
Reading Hint 3 and Hint 4 verifies my idea, because there are configuration files in the user's home directory, in a file called setupNginx.sh. There's the following line in the code:
From here, we know that the username is flaws and the password is nCP8xigdjpjyiXgJ7nJu7rw5Ro68iE8M. I used these to gain access to the EC2 webserver, and I successfully got authenticated. The page contains the link for Level 5, and we can continue from here.

What just happened?
AWS lets us make snapshots of EC2s and databases (RDS). Usually they're for backups, but there are other use cases, like regaining access to an EC2 with a forgotten password. Normally, snapshots are restricted to the owner's account, and in this level we gained access to the snapshot because we have the credentials for the backup account. With that access, we can fire up an EC2 with a volume created from that snapshot and gain access to its contents.
Level 5: Proxy Party
This EC2 has a simple HTTP only proxy on it. Here are some examples of its usage:
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/flaws.cloud/
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/summitroute.com/blog/feed.xml
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/neverssl.com/
See if you can use this proxy to figure out how to list the contents of the level6 bucket at level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud that has a hidden directory in it.
Because we're given a proxy on the EC2 from level 4, it's safe to assume the level 5 bucket would be useless, so I didn't try listing it. Also, we're already given the level 6 bucket name here (level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud), but accessing it tells us that the actual level is hosted in a subdirectory.

I tried listing the bucket using the backup user we obtained earlier to see the available subdirectories, but that also doesn't work because access is denied. With these limitations, the only way is perhaps to (somehow) get another identity from the EC2 proxy, or to list the level 6 bucket through the proxy. But I was kind of blind here, so I started with the first hint.
Reading Hint 1 made me learn something new: on most cloud services (such as AWS), there is a special IP address, 169.254.169.254, reserved for a metadata service. The metadata service is accessible only from inside an EC2 (or the equivalent on other cloud services), so it's an internal service. But in this case, we have a proxy running on an EC2, which makes that IP address accessible to the public.
Author's Note: I think it is assumed that flaws players have heard of the metadata service before. If it weren't for the hint, I don't think I'd get anywhere in this level because I have never heard of the metadata service and the magic IP address.
As first exploration, I tried accessing that IP address through the proxy:
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254
It returns the following HTML response:
which I assume is some kind of version listing. And since the "magic" IP address is a metadata service, this might work like an index: if I append any of those versions to the URL, I should be directed to another page. So I tried appending "latest" (assuming that's the currently used version) to the URL, and got these results:
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest
My assumption was correct. Those entries expose the metadata for the selected version, and inside a version, meta-data is obviously the interesting one.
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data
It returns the following HTTP response:
We have a lot of information here, which I assume is the metadata of the EC2. Fuzzing the endpoints one by one would take too much effort, so I tried reading the metadata service documentation. One thing stood out for me:

iam/security-credentials/role-name metadata endpoint.

This endpoint contains the temporary security credentials associated with any role available there. This will definitely be useful, but since I didn't know what roles exist on the EC2, I tried accessing the URL with role-name left empty. The HTML returned only one entry: flaws, which I assume is the available role name. So with that, I visited this URL:
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest/meta-data/iam/security-credentials/flaws
and it returns the security credentials (complete access key & secret) for the flaws role in the EC2.
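One gotcha with role credentials like these: they're temporary, so besides the access key ID and secret, the Token value must also be configured (as the session token), or every call will fail. A sketch using environment variables (an alternative to aws configure) and a hypothetical, redacted response:

```shell
# Hypothetical (redacted) response shape from .../security-credentials/flaws;
# the real one comes back from the metadata service via the proxy.
CREDS='{"AccessKeyId":"ASIAEXAMPLE","SecretAccessKey":"wJalrEXAMPLEKEY","Token":"FQoDEXAMPLETOKEN"}'
# Small helper to pull one field out of the JSON:
field() { echo "$CREDS" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"; }
# Temporary role credentials only work if all three values are set:
export AWS_ACCESS_KEY_ID=$(field AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(field SecretAccessKey)
export AWS_SESSION_TOKEN=$(field Token)
echo "$AWS_ACCESS_KEY_ID"   # ASIAEXAMPLE
```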
After setting these up in the AWS CLI using aws --profile flaws5 configure, I tried listing the level 6 bucket. The listing was successful, and I discovered the subdirectory for level 6: ddcc78ff.
With this, level 6 is accessible in http://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/ddcc78ff/.
What just happened?
It turns out the majority of cloud services implement the special IP address (169.254.169.254) as the metadata service, so it's not only AWS (there's even an RFC for it). On other cloud services, there are security constraints, like special headers, needed to access the metadata service. The same goes for AWS: the newer IMDSv2 requires a special header and a token-based challenge and response. However, many EC2 instances don't enforce those rules, so if there's a way to reach the IP from an EC2 (in this level's case, via the proxy), information on the EC2 can be extracted.
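For reference, the IMDSv2 handshake looks roughly like this (printed rather than executed, since it only works from inside an instance; the header names are the documented ones):

```shell
# IMDSv2's token handshake, shown as the commands you'd run on the instance:
IMDSV2=$(cat <<'EOF'
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/
EOF
)
echo "$IMDSV2"
```

Without the token header, an IMDSv2-enforcing instance refuses metadata requests, which is exactly what breaks this level's proxy trick.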
Level 6: Naughty Auditor
To Be Written.